Jan 30 14:08:42.404156 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 14:08:42.404178 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:08:42.404186 kernel: KASLR enabled
Jan 30 14:08:42.404192 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 30 14:08:42.404200 kernel: printk: bootconsole [pl11] enabled
Jan 30 14:08:42.404205 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:08:42.404212 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 30 14:08:42.404219 kernel: random: crng init done
Jan 30 14:08:42.404225 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:08:42.404231 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 30 14:08:42.404237 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404243 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404250 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 30 14:08:42.404256 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404264 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404270 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404277 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404285 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404291 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404298 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 30 14:08:42.404304 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 14:08:42.404311 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 30 14:08:42.404317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 30 14:08:42.404323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 30 14:08:42.404330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 30 14:08:42.404336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 30 14:08:42.404342 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 30 14:08:42.404349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 30 14:08:42.404356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 30 14:08:42.404363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 30 14:08:42.404369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 30 14:08:42.404376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 30 14:08:42.404382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 30 14:08:42.404388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 30 14:08:42.404394 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 30 14:08:42.404400 kernel: Zone ranges:
Jan 30 14:08:42.404407 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 30 14:08:42.404413 kernel: DMA32 empty
Jan 30 14:08:42.404419 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 14:08:42.404426 kernel: Movable zone start for each node
Jan 30 14:08:42.404436 kernel: Early memory node ranges
Jan 30 14:08:42.404443 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 30 14:08:42.404450 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 30 14:08:42.404457 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 30 14:08:42.404463 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 30 14:08:42.404471 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 30 14:08:42.404478 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 30 14:08:42.404484 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 30 14:08:42.404491 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 30 14:08:42.404498 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 30 14:08:42.404504 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:08:42.404511 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 14:08:42.404518 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:08:42.404524 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 30 14:08:42.404539 kernel: psci: SMC Calling Convention v1.4
Jan 30 14:08:42.404546 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 30 14:08:42.404553 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 30 14:08:42.404561 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:08:42.404568 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:08:42.404575 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:08:42.404581 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:08:42.404588 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:08:42.404595 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 14:08:42.404601 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:08:42.404608 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 14:08:42.404615 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 14:08:42.404621 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 14:08:42.404628 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 30 14:08:42.404637 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 14:08:42.404643 kernel: alternatives: applying boot alternatives
Jan 30 14:08:42.404651 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:08:42.404659 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:08:42.404665 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:08:42.404672 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:08:42.404679 kernel: Fallback order for Node 0: 0
Jan 30 14:08:42.404686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 30 14:08:42.404692 kernel: Policy zone: Normal
Jan 30 14:08:42.404699 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:08:42.404706 kernel: software IO TLB: area num 2.
Jan 30 14:08:42.404714 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 30 14:08:42.404721 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Jan 30 14:08:42.404728 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:08:42.404735 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:08:42.404742 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:08:42.404749 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:08:42.404756 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:08:42.404763 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:08:42.404770 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:08:42.404776 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:08:42.404783 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:08:42.404791 kernel: GICv3: 960 SPIs implemented
Jan 30 14:08:42.404798 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:08:42.404805 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:08:42.404811 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 14:08:42.404818 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 30 14:08:42.404825 kernel: ITS: No ITS available, not enabling LPIs
Jan 30 14:08:42.404832 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:08:42.404839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:08:42.404845 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 14:08:42.404852 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 14:08:42.404859 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 14:08:42.404867 kernel: Console: colour dummy device 80x25
Jan 30 14:08:42.404874 kernel: printk: console [tty1] enabled
Jan 30 14:08:42.404881 kernel: ACPI: Core revision 20230628
Jan 30 14:08:42.404888 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 14:08:42.404895 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:08:42.404902 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:08:42.404909 kernel: landlock: Up and running.
Jan 30 14:08:42.404915 kernel: SELinux: Initializing.
Jan 30 14:08:42.404922 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:08:42.404929 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:08:42.404938 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:08:42.404945 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:08:42.404952 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 30 14:08:42.404959 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 30 14:08:42.404966 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 14:08:42.404972 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:08:42.404980 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:08:42.404993 kernel: Remapping and enabling EFI services.
Jan 30 14:08:42.405000 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:08:42.405007 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:08:42.405015 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 30 14:08:42.405023 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:08:42.405030 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 14:08:42.405038 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:08:42.405045 kernel: SMP: Total of 2 processors activated.
Jan 30 14:08:42.405052 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:08:42.405061 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 30 14:08:42.405068 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 14:08:42.405075 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:08:42.405083 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 14:08:42.405090 kernel: CPU features: detected: LSE atomic instructions
Jan 30 14:08:42.405097 kernel: CPU features: detected: Privileged Access Never
Jan 30 14:08:42.405105 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:08:42.405112 kernel: alternatives: applying system-wide alternatives
Jan 30 14:08:42.405119 kernel: devtmpfs: initialized
Jan 30 14:08:42.405128 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:08:42.405135 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:08:42.405142 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:08:42.405149 kernel: SMBIOS 3.1.0 present.
Jan 30 14:08:42.405157 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 30 14:08:42.405164 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:08:42.405171 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:08:42.405179 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:08:42.405186 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:08:42.405195 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:08:42.405202 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 30 14:08:42.405209 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:08:42.405217 kernel: cpuidle: using governor menu
Jan 30 14:08:42.405224 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:08:42.405231 kernel: ASID allocator initialised with 32768 entries
Jan 30 14:08:42.405238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:08:42.405245 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:08:42.405252 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 14:08:42.405261 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 14:08:42.405268 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:08:42.405275 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:08:42.405283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:08:42.405290 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:08:42.405297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:08:42.405304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:08:42.405312 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:08:42.405319 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:08:42.405327 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:08:42.405335 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:08:42.405342 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:08:42.405349 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:08:42.405356 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:08:42.405364 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:08:42.405371 kernel: ACPI: Interpreter enabled
Jan 30 14:08:42.405378 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:08:42.405385 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 14:08:42.405394 kernel: printk: console [ttyAMA0] enabled
Jan 30 14:08:42.405401 kernel: printk: bootconsole [pl11] disabled
Jan 30 14:08:42.405409 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 30 14:08:42.405416 kernel: iommu: Default domain type: Translated
Jan 30 14:08:42.405423 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:08:42.405431 kernel: efivars: Registered efivars operations
Jan 30 14:08:42.405438 kernel: vgaarb: loaded
Jan 30 14:08:42.405445 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:08:42.405452 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:08:42.405461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:08:42.405469 kernel: pnp: PnP ACPI init
Jan 30 14:08:42.405476 kernel: pnp: PnP ACPI: found 0 devices
Jan 30 14:08:42.405484 kernel: NET: Registered PF_INET protocol family
Jan 30 14:08:42.405496 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:08:42.405504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:08:42.405511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:08:42.405519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:08:42.405526 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:08:42.405541 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:08:42.405548 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:08:42.405556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:08:42.405563 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:08:42.405570 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:08:42.405578 kernel: kvm [1]: HYP mode not available
Jan 30 14:08:42.405585 kernel: Initialise system trusted keyrings
Jan 30 14:08:42.405592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:08:42.405600 kernel: Key type asymmetric registered
Jan 30 14:08:42.405608 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:08:42.405615 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:08:42.405623 kernel: io scheduler mq-deadline registered
Jan 30 14:08:42.405630 kernel: io scheduler kyber registered
Jan 30 14:08:42.405637 kernel: io scheduler bfq registered
Jan 30 14:08:42.405644 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:08:42.405651 kernel: thunder_xcv, ver 1.0
Jan 30 14:08:42.405658 kernel: thunder_bgx, ver 1.0
Jan 30 14:08:42.405666 kernel: nicpf, ver 1.0
Jan 30 14:08:42.405673 kernel: nicvf, ver 1.0
Jan 30 14:08:42.405813 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:08:42.405887 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:08:41 UTC (1738246121)
Jan 30 14:08:42.405898 kernel: efifb: probing for efifb
Jan 30 14:08:42.405905 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 14:08:42.405913 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 14:08:42.405920 kernel: efifb: scrolling: redraw
Jan 30 14:08:42.405928 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 14:08:42.405938 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 14:08:42.405945 kernel: fb0: EFI VGA frame buffer device
Jan 30 14:08:42.405952 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 30 14:08:42.405960 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:08:42.405967 kernel: No ACPI PMU IRQ for CPU0
Jan 30 14:08:42.405975 kernel: No ACPI PMU IRQ for CPU1
Jan 30 14:08:42.405982 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 30 14:08:42.405989 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:08:42.405997 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:08:42.406006 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:08:42.406013 kernel: Segment Routing with IPv6
Jan 30 14:08:42.406021 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:08:42.406028 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:08:42.406036 kernel: Key type dns_resolver registered
Jan 30 14:08:42.406043 kernel: registered taskstats version 1
Jan 30 14:08:42.406050 kernel: Loading compiled-in X.509 certificates
Jan 30 14:08:42.406057 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:08:42.406065 kernel: Key type .fscrypt registered
Jan 30 14:08:42.406074 kernel: Key type fscrypt-provisioning registered
Jan 30 14:08:42.406081 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:08:42.406088 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:08:42.406096 kernel: ima: No architecture policies found
Jan 30 14:08:42.406103 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:08:42.406110 kernel: clk: Disabling unused clocks
Jan 30 14:08:42.406117 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:08:42.406125 kernel: Run /init as init process
Jan 30 14:08:42.406132 kernel: with arguments:
Jan 30 14:08:42.406141 kernel: /init
Jan 30 14:08:42.406149 kernel: with environment:
Jan 30 14:08:42.406156 kernel: HOME=/
Jan 30 14:08:42.406163 kernel: TERM=linux
Jan 30 14:08:42.406171 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:08:42.406180 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:08:42.406190 systemd[1]: Detected virtualization microsoft.
Jan 30 14:08:42.406198 systemd[1]: Detected architecture arm64.
Jan 30 14:08:42.406207 systemd[1]: Running in initrd.
Jan 30 14:08:42.406215 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:08:42.406222 systemd[1]: Hostname set to .
Jan 30 14:08:42.406231 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:08:42.406238 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:08:42.406246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:08:42.406255 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:08:42.406263 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:08:42.406273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:08:42.406281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:08:42.406289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:08:42.406298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:08:42.406307 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:08:42.406314 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:08:42.406324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:08:42.406332 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:08:42.406340 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:08:42.406348 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:08:42.406356 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:08:42.406364 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:08:42.406372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:08:42.406380 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:08:42.406388 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:08:42.406398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:08:42.406406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:08:42.406414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:08:42.406422 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:08:42.406430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:08:42.406438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:08:42.406446 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:08:42.406454 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:08:42.406462 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:08:42.406471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:08:42.406495 systemd-journald[217]: Collecting audit messages is disabled.
Jan 30 14:08:42.406515 systemd-journald[217]: Journal started
Jan 30 14:08:42.406547 systemd-journald[217]: Runtime Journal (/run/log/journal/a3cb1d5a1e0e4d2b83f89b08399d8c49) is 8.0M, max 78.5M, 70.5M free.
Jan 30 14:08:42.426477 systemd-modules-load[218]: Inserted module 'overlay'
Jan 30 14:08:42.438612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:08:42.467897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:08:42.467993 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:08:42.468037 kernel: Bridge firewalling registered
Jan 30 14:08:42.479453 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 30 14:08:42.480902 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:08:42.489356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:08:42.500553 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:08:42.511654 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:08:42.526521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:08:42.562436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:08:42.571862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:08:42.597772 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:08:42.623052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:08:42.639382 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:08:42.655886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:08:42.678237 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:08:42.691324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:08:42.727689 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:08:42.740686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:08:42.748724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:08:42.777385 dracut-cmdline[250]: dracut-dracut-053
Jan 30 14:08:42.789496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:08:42.807032 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:08:42.854933 systemd-resolved[253]: Positive Trust Anchors:
Jan 30 14:08:42.854949 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:08:42.854982 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:08:42.863080 systemd-resolved[253]: Defaulting to hostname 'linux'.
Jan 30 14:08:42.864038 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:08:42.892067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:08:42.969548 kernel: SCSI subsystem initialized
Jan 30 14:08:42.978565 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:08:42.989594 kernel: iscsi: registered transport (tcp)
Jan 30 14:08:43.005544 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:08:43.005563 kernel: QLogic iSCSI HBA Driver
Jan 30 14:08:43.052262 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:08:43.070903 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:08:43.108569 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:08:43.108645 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:08:43.116404 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:08:43.167556 kernel: raid6: neonx8 gen() 15766 MB/s
Jan 30 14:08:43.187557 kernel: raid6: neonx4 gen() 15654 MB/s
Jan 30 14:08:43.207557 kernel: raid6: neonx2 gen() 13286 MB/s
Jan 30 14:08:43.228543 kernel: raid6: neonx1 gen() 10403 MB/s
Jan 30 14:08:43.248544 kernel: raid6: int64x8 gen() 6812 MB/s
Jan 30 14:08:43.268542 kernel: raid6: int64x4 gen() 7353 MB/s
Jan 30 14:08:43.289543 kernel: raid6: int64x2 gen() 6114 MB/s
Jan 30 14:08:43.312621 kernel: raid6: int64x1 gen() 5053 MB/s
Jan 30 14:08:43.312651 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Jan 30 14:08:43.339021 kernel: raid6: .... xor() 11917 MB/s, rmw enabled
Jan 30 14:08:43.339092 kernel: raid6: using neon recovery algorithm
Jan 30 14:08:43.350771 kernel: xor: measuring software checksum speed
Jan 30 14:08:43.350792 kernel: 8regs : 19783 MB/sec
Jan 30 14:08:43.354468 kernel: 32regs : 19641 MB/sec
Jan 30 14:08:43.358102 kernel: arm64_neon : 26910 MB/sec
Jan 30 14:08:43.362734 kernel: xor: using function: arm64_neon (26910 MB/sec)
Jan 30 14:08:43.415555 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:08:43.428099 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:08:43.445746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:08:43.469947 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 30 14:08:43.475706 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:08:43.502689 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:08:43.521072 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation
Jan 30 14:08:43.552438 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:08:43.567877 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:08:43.611371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:08:43.629060 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:08:43.647619 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:08:43.661057 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:08:43.676938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:08:43.692274 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:08:43.711727 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:08:43.736585 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:08:43.753758 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:08:43.779749 kernel: hv_vmbus: Vmbus version:5.3
Jan 30 14:08:43.779774 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 14:08:43.753919 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:08:43.803671 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 14:08:43.803696 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 14:08:43.773972 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:08:43.843818 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 14:08:43.843846 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 30 14:08:43.787966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:08:43.887391 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 14:08:43.887415 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 14:08:43.887425 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 30 14:08:43.887435 kernel: scsi host0: storvsc_host_t
Jan 30 14:08:43.893984 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 14:08:43.894878 kernel: scsi host1: storvsc_host_t
Jan 30 14:08:43.895006 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 14:08:43.788156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:08:43.814658 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:08:43.933594 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 14:08:43.933675 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: VF slot 1 added
Jan 30 14:08:43.871996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:08:43.907693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:08:43.907822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:08:43.949507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:08:43.991203 kernel: hv_vmbus: registering driver hv_pci
Jan 30 14:08:43.991262 kernel: PTP clock support registered
Jan 30 14:08:43.991272 kernel: hv_pci 6623247a-e315-4206-bc59-0f143e0861ec: PCI VMBus probing: Using version 0x10004
Jan 30 14:08:43.722701 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 14:08:43.733257 kernel: hv_pci 6623247a-e315-4206-bc59-0f143e0861ec: PCI host bridge to bus e315:00
Jan 30 14:08:43.733431 kernel: hv_vmbus: registering driver hv_utils
Jan 30 14:08:43.733464 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 14:08:43.733603 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 14:08:43.733690 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 14:08:43.733701 kernel: pci_bus e315:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 30 14:08:43.733800 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 14:08:43.733811 kernel: pci_bus e315:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 14:08:43.733891 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 14:08:43.733977 kernel: pci e315:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 30 14:08:43.734074 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 14:08:43.734082 kernel: pci e315:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 14:08:43.735503 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 14:08:43.735615 kernel: pci e315:00:02.0: enabling Extended Tags
Jan 30 14:08:43.735699 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 14:08:43.735787 kernel: pci e315:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e315:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 30 14:08:43.735872 kernel: pci_bus e315:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 14:08:43.735955 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:08:43.735967 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 14:08:43.736052 kernel: pci e315:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 30 14:08:43.736179 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 30 14:08:43.741751 systemd-journald[217]: Time jumped backwards, rotating.
Jan 30 14:08:43.741836 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 14:08:43.741846 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 30 14:08:44.001810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:08:44.022816 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:08:43.654924 systemd-resolved[253]: Clock change detected. Flushing caches.
Jan 30 14:08:43.763842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:08:43.814278 kernel: mlx5_core e315:00:02.0: enabling device (0000 -> 0002)
Jan 30 14:08:44.041191 kernel: mlx5_core e315:00:02.0: firmware version: 16.30.1284
Jan 30 14:08:44.041357 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: VF registering: eth1
Jan 30 14:08:44.041458 kernel: mlx5_core e315:00:02.0 eth1: joined to eth0
Jan 30 14:08:44.041563 kernel: mlx5_core e315:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 30 14:08:44.052135 kernel: mlx5_core e315:00:02.0 enP58133s1: renamed from eth1
Jan 30 14:08:44.180669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 14:08:44.290254 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (508)
Jan 30 14:08:44.309390 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (504)
Jan 30 14:08:44.315515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 14:08:44.335766 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 14:08:44.352367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 14:08:44.359637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 14:08:44.393389 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:08:44.418135 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:08:44.427150 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:08:44.435119 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:08:45.446127 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:08:45.446698 disk-uuid[601]: The operation has completed successfully.
Jan 30 14:08:45.512213 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:08:45.517022 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:08:45.551263 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:08:45.564310 sh[715]: Success
Jan 30 14:08:45.593275 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 14:08:45.775655 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:08:45.785281 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:08:45.797168 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:08:45.833037 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 14:08:45.833081 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:08:45.841871 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:08:45.847082 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:08:45.851343 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:08:46.170047 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:08:46.175177 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:08:46.198383 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:08:46.206034 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:08:46.236957 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:08:46.236982 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:08:46.247129 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:08:46.271197 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:08:46.289320 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:08:46.295126 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:08:46.302860 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:08:46.317433 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:08:46.358169 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:08:46.376284 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:08:46.407163 systemd-networkd[899]: lo: Link UP
Jan 30 14:08:46.407178 systemd-networkd[899]: lo: Gained carrier
Jan 30 14:08:46.409400 systemd-networkd[899]: Enumeration completed
Jan 30 14:08:46.410307 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:08:46.410310 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:08:46.414431 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:08:46.421652 systemd[1]: Reached target network.target - Network.
Jan 30 14:08:46.485145 kernel: mlx5_core e315:00:02.0 enP58133s1: Link up
Jan 30 14:08:46.525134 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: Data path switched to VF: enP58133s1
Jan 30 14:08:46.525378 systemd-networkd[899]: enP58133s1: Link UP
Jan 30 14:08:46.525494 systemd-networkd[899]: eth0: Link UP
Jan 30 14:08:46.525590 systemd-networkd[899]: eth0: Gained carrier
Jan 30 14:08:46.525599 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:08:46.543257 systemd-networkd[899]: enP58133s1: Gained carrier
Jan 30 14:08:46.566154 systemd-networkd[899]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 14:08:47.238439 ignition[852]: Ignition 2.19.0
Jan 30 14:08:47.238450 ignition[852]: Stage: fetch-offline
Jan 30 14:08:47.242302 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:08:47.238493 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:08:47.255406 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:08:47.238501 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:08:47.238611 ignition[852]: parsed url from cmdline: ""
Jan 30 14:08:47.238615 ignition[852]: no config URL provided
Jan 30 14:08:47.238619 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:08:47.238626 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:08:47.238631 ignition[852]: failed to fetch config: resource requires networking
Jan 30 14:08:47.238810 ignition[852]: Ignition finished successfully
Jan 30 14:08:47.280913 ignition[908]: Ignition 2.19.0
Jan 30 14:08:47.280920 ignition[908]: Stage: fetch
Jan 30 14:08:47.281120 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:08:47.281130 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:08:47.281222 ignition[908]: parsed url from cmdline: ""
Jan 30 14:08:47.281225 ignition[908]: no config URL provided
Jan 30 14:08:47.281230 ignition[908]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:08:47.281237 ignition[908]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:08:47.281263 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 14:08:47.405712 ignition[908]: GET result: OK
Jan 30 14:08:47.405817 ignition[908]: config has been read from IMDS userdata
Jan 30 14:08:47.405859 ignition[908]: parsing config with SHA512: ad2889aeebd0981a96268d510fcfa6f0528f20bfd146d7b150d8c6a564ede4be148a4101d203ad435857f87d4a3c2443c9c9b083ea08dce85c6fca06978136da
Jan 30 14:08:47.410937 unknown[908]: fetched base config from "system"
Jan 30 14:08:47.411731 ignition[908]: fetch: fetch complete
Jan 30 14:08:47.410945 unknown[908]: fetched base config from "system"
Jan 30 14:08:47.411737 ignition[908]: fetch: fetch passed
Jan 30 14:08:47.411162 unknown[908]: fetched user config from "azure"
Jan 30 14:08:47.411822 ignition[908]: Ignition finished successfully
Jan 30 14:08:47.414081 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:08:47.451649 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:08:47.485849 ignition[915]: Ignition 2.19.0
Jan 30 14:08:47.485862 ignition[915]: Stage: kargs
Jan 30 14:08:47.486067 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:08:47.494476 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:08:47.486135 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:08:47.487307 ignition[915]: kargs: kargs passed
Jan 30 14:08:47.518362 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:08:47.487375 ignition[915]: Ignition finished successfully
Jan 30 14:08:47.546354 ignition[921]: Ignition 2.19.0
Jan 30 14:08:47.546375 ignition[921]: Stage: disks
Jan 30 14:08:47.552291 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:08:47.546573 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:08:47.561082 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:08:47.546585 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:08:47.574357 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:08:47.547628 ignition[921]: disks: disks passed
Jan 30 14:08:47.587244 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:08:47.547680 ignition[921]: Ignition finished successfully
Jan 30 14:08:47.599854 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:08:47.612748 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:08:47.636413 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:08:47.724736 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 14:08:47.737595 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:08:47.763348 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:08:47.825691 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 14:08:47.826125 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:08:47.836258 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:08:47.885197 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:08:47.896760 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:08:47.911355 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 14:08:47.939355 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940)
Jan 30 14:08:47.939384 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:08:47.939395 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:08:47.924574 systemd-networkd[899]: eth0: Gained IPv6LL
Jan 30 14:08:47.962305 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:08:47.932937 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:08:47.932975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:08:47.959009 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:08:48.009484 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:08:47.997436 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:08:48.018422 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:08:48.359243 systemd-networkd[899]: enP58133s1: Gained IPv6LL
Jan 30 14:08:48.370778 coreos-metadata[942]: Jan 30 14:08:48.370 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 14:08:48.381303 coreos-metadata[942]: Jan 30 14:08:48.374 INFO Fetch successful
Jan 30 14:08:48.381303 coreos-metadata[942]: Jan 30 14:08:48.374 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 14:08:48.399719 coreos-metadata[942]: Jan 30 14:08:48.387 INFO Fetch successful
Jan 30 14:08:48.405125 coreos-metadata[942]: Jan 30 14:08:48.402 INFO wrote hostname ci-4081.3.0-a-554d7cc729 to /sysroot/etc/hostname
Jan 30 14:08:48.406049 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:08:48.670874 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:08:48.725345 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:08:48.734552 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:08:48.744320 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:08:49.677198 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:08:49.696631 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:08:49.709359 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:08:49.727273 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:08:49.740601 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:08:49.767149 ignition[1057]: INFO : Ignition 2.19.0
Jan 30 14:08:49.767149 ignition[1057]: INFO : Stage: mount
Jan 30 14:08:49.767149 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:08:49.767149 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:08:49.802544 ignition[1057]: INFO : mount: mount passed
Jan 30 14:08:49.802544 ignition[1057]: INFO : Ignition finished successfully
Jan 30 14:08:49.768313 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:08:49.777890 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:08:49.807368 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:08:49.830368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:08:49.859659 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1070)
Jan 30 14:08:49.859680 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:08:49.874125 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:08:49.879023 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:08:49.887113 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:08:49.889612 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:08:49.921782 ignition[1087]: INFO : Ignition 2.19.0
Jan 30 14:08:49.921782 ignition[1087]: INFO : Stage: files
Jan 30 14:08:49.929993 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:08:49.929993 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:08:49.929993 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:08:49.957929 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:08:49.957929 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:08:50.016387 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:08:50.023954 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:08:50.023954 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:08:50.016802 unknown[1087]: wrote ssh authorized keys file for user: core
Jan 30 14:08:50.048101 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:08:50.060372 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 14:08:50.092754 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 14:08:50.193004 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:08:50.193004 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 14:08:50.648903 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 14:08:50.999821 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:08:51.013469 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:08:51.094027 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:08:51.094027 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:08:51.094027 ignition[1087]: INFO : files: files passed
Jan 30 14:08:51.094027 ignition[1087]: INFO : Ignition finished successfully
Jan 30 14:08:51.044546 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:08:51.094450 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:08:51.114414 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:08:51.145506 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:08:51.202375 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:08:51.202375 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:08:51.145620 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:08:51.239798 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:08:51.171287 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:08:51.179933 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:08:51.214312 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:08:51.275513 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:08:51.277145 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:08:51.290599 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:08:51.304968 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:08:51.317403 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:08:51.337430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:08:51.364175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:08:51.384478 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:08:51.405876 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:08:51.421169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:08:51.429492 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:08:51.441926 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:08:51.442010 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:08:51.460565 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:08:51.473917 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:08:51.486942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:08:51.498946 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:08:51.512603 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:08:51.527600 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:08:51.540777 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:08:51.556673 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:08:51.570580 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:08:51.582762 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:08:51.593705 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:08:51.593791 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:08:51.613733 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:08:51.621172 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:08:51.638438 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:08:51.652978 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:08:51.661126 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:08:51.661217 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:08:51.683782 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:08:51.683850 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:08:51.699264 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:08:51.699326 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:08:51.713186 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 14:08:51.713252 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:08:51.793066 ignition[1139]: INFO : Ignition 2.19.0
Jan 30 14:08:51.793066 ignition[1139]: INFO : Stage: umount
Jan 30 14:08:51.793066 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:08:51.793066 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 14:08:51.793066 ignition[1139]: INFO : umount: umount passed
Jan 30 14:08:51.793066 ignition[1139]: INFO : Ignition finished successfully
Jan 30 14:08:51.749338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:08:51.763355 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:08:51.763519 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:08:51.802282 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:08:51.812342 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:08:51.812429 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:08:51.829811 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:08:51.829887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:08:51.846395 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:08:51.846967 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:08:51.847116 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:08:51.865644 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:08:51.865803 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:08:51.875201 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:08:51.875353 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:08:51.887371 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:08:51.888200 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:08:51.899857 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:08:51.899915 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:08:51.914904 systemd[1]: Stopped target network.target - Network.
Jan 30 14:08:51.927038 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:08:51.927138 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:08:51.944486 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:08:51.957351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:08:51.964265 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:08:51.973991 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:08:51.987145 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:08:51.998204 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:08:51.998262 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:08:52.011983 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:08:52.012049 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:08:52.026510 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:08:52.026588 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:08:52.032842 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:08:52.032890 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:08:52.045541 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:08:52.063943 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:08:52.089153 systemd-networkd[899]: eth0: DHCPv6 lease lost Jan 30 14:08:52.095542 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:08:52.095783 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:08:52.381103 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: Data path switched from VF: enP58133s1 Jan 30 14:08:52.105916 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:08:52.106021 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:08:52.119250 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:08:52.119335 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:08:52.162303 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:08:52.170786 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:08:52.170871 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:08:52.180459 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:08:52.180527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:08:52.196692 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:08:52.196775 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:08:52.216142 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 30 14:08:52.216254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:08:52.230848 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:08:52.277784 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:08:52.279331 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:08:52.296849 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:08:52.296956 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:08:52.310564 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:08:52.310624 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:08:52.325573 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:08:52.325725 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:08:52.360724 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:08:52.360814 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:08:52.381166 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:08:52.381235 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:08:52.415392 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:08:52.437738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:08:52.437867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:08:52.453892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:08:52.453959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:08:52.472720 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 30 14:08:52.472855 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:08:52.485672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:08:52.700247 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 30 14:08:52.485763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:08:52.568741 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:08:52.568898 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:08:52.577124 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:08:52.588915 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:08:52.588991 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:08:52.612428 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:08:52.632058 systemd[1]: Switching root. Jan 30 14:08:52.714154 systemd-journald[217]: Journal stopped Jan 30 14:08:42.404156 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 30 14:08:42.404178 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 30 14:08:42.404186 kernel: KASLR enabled Jan 30 14:08:42.404192 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 30 14:08:42.404200 kernel: printk: bootconsole [pl11] enabled Jan 30 14:08:42.404205 kernel: efi: EFI v2.7 by EDK II Jan 30 14:08:42.404212 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 30 14:08:42.404219 kernel: random: crng init done Jan 30 14:08:42.404225 kernel: ACPI: Early table checksum verification disabled Jan 30 14:08:42.404231 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 30 14:08:42.404237 
kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404243 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404250 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 30 14:08:42.404256 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404264 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404270 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404277 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404285 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404291 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404298 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 30 14:08:42.404304 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 14:08:42.404311 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 30 14:08:42.404317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 30 14:08:42.404323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 30 14:08:42.404330 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 30 14:08:42.404336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 30 14:08:42.404342 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 30 14:08:42.404349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 30 14:08:42.404356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 30 14:08:42.404363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x80000000000-0xfffffffffff] Jan 30 14:08:42.404369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 30 14:08:42.404376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 30 14:08:42.404382 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 30 14:08:42.404388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 30 14:08:42.404394 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 30 14:08:42.404400 kernel: Zone ranges: Jan 30 14:08:42.404407 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 30 14:08:42.404413 kernel: DMA32 empty Jan 30 14:08:42.404419 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 30 14:08:42.404426 kernel: Movable zone start for each node Jan 30 14:08:42.404436 kernel: Early memory node ranges Jan 30 14:08:42.404443 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 30 14:08:42.404450 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 30 14:08:42.404457 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 30 14:08:42.404463 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 30 14:08:42.404471 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 30 14:08:42.404478 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 30 14:08:42.404484 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 30 14:08:42.404491 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 30 14:08:42.404498 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 30 14:08:42.404504 kernel: psci: probing for conduit method from ACPI. Jan 30 14:08:42.404511 kernel: psci: PSCIv1.1 detected in firmware. Jan 30 14:08:42.404518 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 14:08:42.404524 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 30 14:08:42.404539 kernel: psci: SMC Calling Convention v1.4 Jan 30 14:08:42.404546 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 30 14:08:42.404553 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 30 14:08:42.404561 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 14:08:42.404568 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 14:08:42.404575 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 30 14:08:42.404581 kernel: Detected PIPT I-cache on CPU0 Jan 30 14:08:42.404588 kernel: CPU features: detected: GIC system register CPU interface Jan 30 14:08:42.404595 kernel: CPU features: detected: Hardware dirty bit management Jan 30 14:08:42.404601 kernel: CPU features: detected: Spectre-BHB Jan 30 14:08:42.404608 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 14:08:42.404615 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 14:08:42.404621 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 14:08:42.404628 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 30 14:08:42.404637 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 14:08:42.404643 kernel: alternatives: applying boot alternatives Jan 30 14:08:42.404651 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 14:08:42.404659 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 30 14:08:42.404665 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 14:08:42.404672 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 14:08:42.404679 kernel: Fallback order for Node 0: 0 Jan 30 14:08:42.404686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 30 14:08:42.404692 kernel: Policy zone: Normal Jan 30 14:08:42.404699 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 14:08:42.404706 kernel: software IO TLB: area num 2. Jan 30 14:08:42.404714 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 30 14:08:42.404721 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Jan 30 14:08:42.404728 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 14:08:42.404735 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 14:08:42.404742 kernel: rcu: RCU event tracing is enabled. Jan 30 14:08:42.404749 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 14:08:42.404756 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 14:08:42.404763 kernel: Tracing variant of Tasks RCU enabled. Jan 30 14:08:42.404770 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 14:08:42.404776 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 14:08:42.404783 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 14:08:42.404791 kernel: GICv3: 960 SPIs implemented Jan 30 14:08:42.404798 kernel: GICv3: 0 Extended SPIs implemented Jan 30 14:08:42.404805 kernel: Root IRQ handler: gic_handle_irq Jan 30 14:08:42.404811 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 14:08:42.404818 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 30 14:08:42.404825 kernel: ITS: No ITS available, not enabling LPIs Jan 30 14:08:42.404832 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 14:08:42.404839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 14:08:42.404845 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 14:08:42.404852 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 14:08:42.404859 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 14:08:42.404867 kernel: Console: colour dummy device 80x25 Jan 30 14:08:42.404874 kernel: printk: console [tty1] enabled Jan 30 14:08:42.404881 kernel: ACPI: Core revision 20230628 Jan 30 14:08:42.404888 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 30 14:08:42.404895 kernel: pid_max: default: 32768 minimum: 301 Jan 30 14:08:42.404902 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 14:08:42.404909 kernel: landlock: Up and running. Jan 30 14:08:42.404915 kernel: SELinux: Initializing. 
Jan 30 14:08:42.404922 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 14:08:42.404929 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 14:08:42.404938 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 14:08:42.404945 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 14:08:42.404952 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 30 14:08:42.404959 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 30 14:08:42.404966 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 14:08:42.404972 kernel: rcu: Hierarchical SRCU implementation. Jan 30 14:08:42.404980 kernel: rcu: Max phase no-delay instances is 400. Jan 30 14:08:42.404993 kernel: Remapping and enabling EFI services. Jan 30 14:08:42.405000 kernel: smp: Bringing up secondary CPUs ... Jan 30 14:08:42.405007 kernel: Detected PIPT I-cache on CPU1 Jan 30 14:08:42.405015 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 30 14:08:42.405023 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 14:08:42.405030 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 14:08:42.405038 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 14:08:42.405045 kernel: SMP: Total of 2 processors activated. 
Jan 30 14:08:42.405052 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 14:08:42.405061 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 30 14:08:42.405068 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 14:08:42.405075 kernel: CPU features: detected: CRC32 instructions Jan 30 14:08:42.405083 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 14:08:42.405090 kernel: CPU features: detected: LSE atomic instructions Jan 30 14:08:42.405097 kernel: CPU features: detected: Privileged Access Never Jan 30 14:08:42.405105 kernel: CPU: All CPU(s) started at EL1 Jan 30 14:08:42.405112 kernel: alternatives: applying system-wide alternatives Jan 30 14:08:42.405119 kernel: devtmpfs: initialized Jan 30 14:08:42.405128 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 14:08:42.405135 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 14:08:42.405142 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 14:08:42.405149 kernel: SMBIOS 3.1.0 present. 
Jan 30 14:08:42.405157 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 30 14:08:42.405164 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 14:08:42.405171 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 14:08:42.405179 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 14:08:42.405186 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 14:08:42.405195 kernel: audit: initializing netlink subsys (disabled) Jan 30 14:08:42.405202 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 30 14:08:42.405209 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 14:08:42.405217 kernel: cpuidle: using governor menu Jan 30 14:08:42.405224 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 30 14:08:42.405231 kernel: ASID allocator initialised with 32768 entries Jan 30 14:08:42.405238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 14:08:42.405245 kernel: Serial: AMBA PL011 UART driver Jan 30 14:08:42.405252 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 14:08:42.405261 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 14:08:42.405268 kernel: Modules: 509040 pages in range for PLT usage Jan 30 14:08:42.405275 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 14:08:42.405283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 14:08:42.405290 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 14:08:42.405297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 14:08:42.405304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 14:08:42.405312 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 14:08:42.405319 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Jan 30 14:08:42.405327 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 14:08:42.405335 kernel: ACPI: Added _OSI(Module Device) Jan 30 14:08:42.405342 kernel: ACPI: Added _OSI(Processor Device) Jan 30 14:08:42.405349 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 14:08:42.405356 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 14:08:42.405364 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 14:08:42.405371 kernel: ACPI: Interpreter enabled Jan 30 14:08:42.405378 kernel: ACPI: Using GIC for interrupt routing Jan 30 14:08:42.405385 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 30 14:08:42.405394 kernel: printk: console [ttyAMA0] enabled Jan 30 14:08:42.405401 kernel: printk: bootconsole [pl11] disabled Jan 30 14:08:42.405409 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 30 14:08:42.405416 kernel: iommu: Default domain type: Translated Jan 30 14:08:42.405423 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 14:08:42.405431 kernel: efivars: Registered efivars operations Jan 30 14:08:42.405438 kernel: vgaarb: loaded Jan 30 14:08:42.405445 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 14:08:42.405452 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 14:08:42.405461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 14:08:42.405469 kernel: pnp: PnP ACPI init Jan 30 14:08:42.405476 kernel: pnp: PnP ACPI: found 0 devices Jan 30 14:08:42.405484 kernel: NET: Registered PF_INET protocol family Jan 30 14:08:42.405496 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 14:08:42.405504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 14:08:42.405511 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 
14:08:42.405519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 14:08:42.405526 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 14:08:42.405541 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 14:08:42.405548 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 14:08:42.405556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 14:08:42.405563 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 14:08:42.405570 kernel: PCI: CLS 0 bytes, default 64 Jan 30 14:08:42.405578 kernel: kvm [1]: HYP mode not available Jan 30 14:08:42.405585 kernel: Initialise system trusted keyrings Jan 30 14:08:42.405592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 14:08:42.405600 kernel: Key type asymmetric registered Jan 30 14:08:42.405608 kernel: Asymmetric key parser 'x509' registered Jan 30 14:08:42.405615 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 14:08:42.405623 kernel: io scheduler mq-deadline registered Jan 30 14:08:42.405630 kernel: io scheduler kyber registered Jan 30 14:08:42.405637 kernel: io scheduler bfq registered Jan 30 14:08:42.405644 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 14:08:42.405651 kernel: thunder_xcv, ver 1.0 Jan 30 14:08:42.405658 kernel: thunder_bgx, ver 1.0 Jan 30 14:08:42.405666 kernel: nicpf, ver 1.0 Jan 30 14:08:42.405673 kernel: nicvf, ver 1.0 Jan 30 14:08:42.405813 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 14:08:42.405887 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:08:41 UTC (1738246121) Jan 30 14:08:42.405898 kernel: efifb: probing for efifb Jan 30 14:08:42.405905 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 14:08:42.405913 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 14:08:42.405920 kernel: efifb: scrolling: 
redraw Jan 30 14:08:42.405928 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 14:08:42.405938 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 14:08:42.405945 kernel: fb0: EFI VGA frame buffer device Jan 30 14:08:42.405952 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 30 14:08:42.405960 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:08:42.405967 kernel: No ACPI PMU IRQ for CPU0 Jan 30 14:08:42.405975 kernel: No ACPI PMU IRQ for CPU1 Jan 30 14:08:42.405982 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 30 14:08:42.405989 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 14:08:42.405997 kernel: watchdog: Hard watchdog permanently disabled Jan 30 14:08:42.406006 kernel: NET: Registered PF_INET6 protocol family Jan 30 14:08:42.406013 kernel: Segment Routing with IPv6 Jan 30 14:08:42.406021 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 14:08:42.406028 kernel: NET: Registered PF_PACKET protocol family Jan 30 14:08:42.406036 kernel: Key type dns_resolver registered Jan 30 14:08:42.406043 kernel: registered taskstats version 1 Jan 30 14:08:42.406050 kernel: Loading compiled-in X.509 certificates Jan 30 14:08:42.406057 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 14:08:42.406065 kernel: Key type .fscrypt registered Jan 30 14:08:42.406074 kernel: Key type fscrypt-provisioning registered Jan 30 14:08:42.406081 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 14:08:42.406088 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:08:42.406096 kernel: ima: No architecture policies found Jan 30 14:08:42.406103 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 14:08:42.406110 kernel: clk: Disabling unused clocks Jan 30 14:08:42.406117 kernel: Freeing unused kernel memory: 39360K Jan 30 14:08:42.406125 kernel: Run /init as init process Jan 30 14:08:42.406132 kernel: with arguments: Jan 30 14:08:42.406141 kernel: /init Jan 30 14:08:42.406149 kernel: with environment: Jan 30 14:08:42.406156 kernel: HOME=/ Jan 30 14:08:42.406163 kernel: TERM=linux Jan 30 14:08:42.406171 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:08:42.406180 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:08:42.406190 systemd[1]: Detected virtualization microsoft. Jan 30 14:08:42.406198 systemd[1]: Detected architecture arm64. Jan 30 14:08:42.406207 systemd[1]: Running in initrd. Jan 30 14:08:42.406215 systemd[1]: No hostname configured, using default hostname. Jan 30 14:08:42.406222 systemd[1]: Hostname set to . Jan 30 14:08:42.406231 systemd[1]: Initializing machine ID from random generator. Jan 30 14:08:42.406238 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:08:42.406246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:08:42.406255 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:08:42.406263 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 30 14:08:42.406273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:08:42.406281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:08:42.406289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:08:42.406298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:08:42.406307 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:08:42.406314 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:08:42.406324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:08:42.406332 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:08:42.406340 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:08:42.406348 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:08:42.406356 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:08:42.406364 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:08:42.406372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:08:42.406380 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:08:42.406388 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:08:42.406398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:08:42.406406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:08:42.406414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:08:42.406422 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 30 14:08:42.406430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:08:42.406438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:08:42.406446 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:08:42.406454 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:08:42.406462 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:08:42.406471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:08:42.406495 systemd-journald[217]: Collecting audit messages is disabled. Jan 30 14:08:42.406515 systemd-journald[217]: Journal started Jan 30 14:08:42.406547 systemd-journald[217]: Runtime Journal (/run/log/journal/a3cb1d5a1e0e4d2b83f89b08399d8c49) is 8.0M, max 78.5M, 70.5M free. Jan 30 14:08:42.426477 systemd-modules-load[218]: Inserted module 'overlay' Jan 30 14:08:42.438612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:08:42.467897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:08:42.467993 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:08:42.468037 kernel: Bridge firewalling registered Jan 30 14:08:42.479453 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 30 14:08:42.480902 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:08:42.489356 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:08:42.500553 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:08:42.511654 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:08:42.526521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 14:08:42.562436 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:08:42.571862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:08:42.597772 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:08:42.623052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:08:42.639382 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:08:42.655886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:08:42.678237 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:08:42.691324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:08:42.727689 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:08:42.740686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:08:42.748724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:08:42.777385 dracut-cmdline[250]: dracut-dracut-053 Jan 30 14:08:42.789496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 14:08:42.807032 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 14:08:42.854933 systemd-resolved[253]: Positive Trust Anchors: Jan 30 14:08:42.854949 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:08:42.854982 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:08:42.863080 systemd-resolved[253]: Defaulting to hostname 'linux'. Jan 30 14:08:42.864038 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:08:42.892067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:08:42.969548 kernel: SCSI subsystem initialized Jan 30 14:08:42.978565 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:08:42.989594 kernel: iscsi: registered transport (tcp) Jan 30 14:08:43.005544 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:08:43.005563 kernel: QLogic iSCSI HBA Driver Jan 30 14:08:43.052262 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 14:08:43.070903 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:08:43.108569 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:08:43.108645 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:08:43.116404 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:08:43.167556 kernel: raid6: neonx8 gen() 15766 MB/s Jan 30 14:08:43.187557 kernel: raid6: neonx4 gen() 15654 MB/s Jan 30 14:08:43.207557 kernel: raid6: neonx2 gen() 13286 MB/s Jan 30 14:08:43.228543 kernel: raid6: neonx1 gen() 10403 MB/s Jan 30 14:08:43.248544 kernel: raid6: int64x8 gen() 6812 MB/s Jan 30 14:08:43.268542 kernel: raid6: int64x4 gen() 7353 MB/s Jan 30 14:08:43.289543 kernel: raid6: int64x2 gen() 6114 MB/s Jan 30 14:08:43.312621 kernel: raid6: int64x1 gen() 5053 MB/s Jan 30 14:08:43.312651 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s Jan 30 14:08:43.339021 kernel: raid6: .... xor() 11917 MB/s, rmw enabled Jan 30 14:08:43.339092 kernel: raid6: using neon recovery algorithm Jan 30 14:08:43.350771 kernel: xor: measuring software checksum speed Jan 30 14:08:43.350792 kernel: 8regs : 19783 MB/sec Jan 30 14:08:43.354468 kernel: 32regs : 19641 MB/sec Jan 30 14:08:43.358102 kernel: arm64_neon : 26910 MB/sec Jan 30 14:08:43.362734 kernel: xor: using function: arm64_neon (26910 MB/sec) Jan 30 14:08:43.415555 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:08:43.428099 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:08:43.445746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:08:43.469947 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 30 14:08:43.475706 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 14:08:43.502689 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:08:43.521072 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation Jan 30 14:08:43.552438 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:08:43.567877 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:08:43.611371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:08:43.629060 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:08:43.647619 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:08:43.661057 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:08:43.676938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:08:43.692274 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:08:43.711727 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:08:43.736585 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:08:43.753758 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:08:43.779749 kernel: hv_vmbus: Vmbus version:5.3 Jan 30 14:08:43.779774 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 14:08:43.753919 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:08:43.803671 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 14:08:43.803696 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 14:08:43.773972 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 14:08:43.843818 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 14:08:43.843846 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 30 14:08:43.787966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:08:43.887391 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 14:08:43.887415 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 14:08:43.887425 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 30 14:08:43.887435 kernel: scsi host0: storvsc_host_t Jan 30 14:08:43.893984 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 14:08:43.894878 kernel: scsi host1: storvsc_host_t Jan 30 14:08:43.895006 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 14:08:43.788156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:08:43.814658 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:08:43.933594 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 14:08:43.933675 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: VF slot 1 added Jan 30 14:08:43.871996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:08:43.907693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:08:43.907822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:08:43.949507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 14:08:43.991203 kernel: hv_vmbus: registering driver hv_pci Jan 30 14:08:43.991262 kernel: PTP clock support registered Jan 30 14:08:43.991272 kernel: hv_pci 6623247a-e315-4206-bc59-0f143e0861ec: PCI VMBus probing: Using version 0x10004 Jan 30 14:08:43.722701 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 14:08:43.733257 kernel: hv_pci 6623247a-e315-4206-bc59-0f143e0861ec: PCI host bridge to bus e315:00 Jan 30 14:08:43.733431 kernel: hv_vmbus: registering driver hv_utils Jan 30 14:08:43.733464 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 14:08:43.733603 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 14:08:43.733690 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 14:08:43.733701 kernel: pci_bus e315:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 30 14:08:43.733800 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 14:08:43.733811 kernel: pci_bus e315:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 14:08:43.733891 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 14:08:43.733977 kernel: pci e315:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 30 14:08:43.734074 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 14:08:43.734082 kernel: pci e315:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 14:08:43.735503 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 14:08:43.735615 kernel: pci e315:00:02.0: enabling Extended Tags Jan 30 14:08:43.735699 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 14:08:43.735787 kernel: pci e315:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e315:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 30 14:08:43.735872 kernel: pci_bus e315:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 14:08:43.735955 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:08:43.735967 kernel: sd 0:0:0:0: [sda] Attached SCSI 
disk Jan 30 14:08:43.736052 kernel: pci e315:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 30 14:08:43.736179 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 30 14:08:43.741751 systemd-journald[217]: Time jumped backwards, rotating. Jan 30 14:08:43.741836 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 14:08:43.741846 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 30 14:08:44.001810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:08:44.022816 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:08:43.654924 systemd-resolved[253]: Clock change detected. Flushing caches. Jan 30 14:08:43.763842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:08:43.814278 kernel: mlx5_core e315:00:02.0: enabling device (0000 -> 0002) Jan 30 14:08:44.041191 kernel: mlx5_core e315:00:02.0: firmware version: 16.30.1284 Jan 30 14:08:44.041357 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: VF registering: eth1 Jan 30 14:08:44.041458 kernel: mlx5_core e315:00:02.0 eth1: joined to eth0 Jan 30 14:08:44.041563 kernel: mlx5_core e315:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 30 14:08:44.052135 kernel: mlx5_core e315:00:02.0 enP58133s1: renamed from eth1 Jan 30 14:08:44.180669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 30 14:08:44.290254 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (508) Jan 30 14:08:44.309390 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (504) Jan 30 14:08:44.315515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 14:08:44.335766 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jan 30 14:08:44.352367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 14:08:44.359637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 14:08:44.393389 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:08:44.418135 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:08:44.427150 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:08:44.435119 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:08:45.446127 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:08:45.446698 disk-uuid[601]: The operation has completed successfully. Jan 30 14:08:45.512213 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:08:45.517022 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:08:45.551263 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:08:45.564310 sh[715]: Success Jan 30 14:08:45.593275 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 14:08:45.775655 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:08:45.785281 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:08:45.797168 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 14:08:45.833037 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 14:08:45.833081 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:08:45.841871 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:08:45.847082 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:08:45.851343 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:08:46.170047 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:08:46.175177 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:08:46.198383 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:08:46.206034 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:08:46.236957 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:08:46.236982 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:08:46.247129 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:08:46.271197 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:08:46.289320 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:08:46.295126 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:08:46.302860 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:08:46.317433 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:08:46.358169 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:08:46.376284 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 14:08:46.407163 systemd-networkd[899]: lo: Link UP Jan 30 14:08:46.407178 systemd-networkd[899]: lo: Gained carrier Jan 30 14:08:46.409400 systemd-networkd[899]: Enumeration completed Jan 30 14:08:46.410307 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:08:46.410310 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:08:46.414431 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:08:46.421652 systemd[1]: Reached target network.target - Network. Jan 30 14:08:46.485145 kernel: mlx5_core e315:00:02.0 enP58133s1: Link up Jan 30 14:08:46.525134 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: Data path switched to VF: enP58133s1 Jan 30 14:08:46.525378 systemd-networkd[899]: enP58133s1: Link UP Jan 30 14:08:46.525494 systemd-networkd[899]: eth0: Link UP Jan 30 14:08:46.525590 systemd-networkd[899]: eth0: Gained carrier Jan 30 14:08:46.525599 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:08:46.543257 systemd-networkd[899]: enP58133s1: Gained carrier Jan 30 14:08:46.566154 systemd-networkd[899]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 14:08:47.238439 ignition[852]: Ignition 2.19.0 Jan 30 14:08:47.238450 ignition[852]: Stage: fetch-offline Jan 30 14:08:47.242302 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:08:47.238493 ignition[852]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:08:47.255406 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 14:08:47.238501 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:08:47.238611 ignition[852]: parsed url from cmdline: "" Jan 30 14:08:47.238615 ignition[852]: no config URL provided Jan 30 14:08:47.238619 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:08:47.238626 ignition[852]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:08:47.238631 ignition[852]: failed to fetch config: resource requires networking Jan 30 14:08:47.238810 ignition[852]: Ignition finished successfully Jan 30 14:08:47.280913 ignition[908]: Ignition 2.19.0 Jan 30 14:08:47.280920 ignition[908]: Stage: fetch Jan 30 14:08:47.281120 ignition[908]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:08:47.281130 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:08:47.281222 ignition[908]: parsed url from cmdline: "" Jan 30 14:08:47.281225 ignition[908]: no config URL provided Jan 30 14:08:47.281230 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:08:47.281237 ignition[908]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:08:47.281263 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 14:08:47.405712 ignition[908]: GET result: OK Jan 30 14:08:47.405817 ignition[908]: config has been read from IMDS userdata Jan 30 14:08:47.405859 ignition[908]: parsing config with SHA512: ad2889aeebd0981a96268d510fcfa6f0528f20bfd146d7b150d8c6a564ede4be148a4101d203ad435857f87d4a3c2443c9c9b083ea08dce85c6fca06978136da Jan 30 14:08:47.410937 unknown[908]: fetched base config from "system" Jan 30 14:08:47.411731 ignition[908]: fetch: fetch complete Jan 30 14:08:47.410945 unknown[908]: fetched base config from "system" Jan 30 14:08:47.411737 ignition[908]: fetch: fetch passed Jan 30 14:08:47.411162 unknown[908]: fetched user config from "azure" Jan 30 14:08:47.411822 ignition[908]: Ignition finished 
successfully Jan 30 14:08:47.414081 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:08:47.451649 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 14:08:47.485849 ignition[915]: Ignition 2.19.0 Jan 30 14:08:47.485862 ignition[915]: Stage: kargs Jan 30 14:08:47.486067 ignition[915]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:08:47.494476 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:08:47.486135 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:08:47.487307 ignition[915]: kargs: kargs passed Jan 30 14:08:47.518362 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:08:47.487375 ignition[915]: Ignition finished successfully Jan 30 14:08:47.546354 ignition[921]: Ignition 2.19.0 Jan 30 14:08:47.546375 ignition[921]: Stage: disks Jan 30 14:08:47.552291 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:08:47.546573 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:08:47.561082 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:08:47.546585 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:08:47.574357 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:08:47.547628 ignition[921]: disks: disks passed Jan 30 14:08:47.587244 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:08:47.547680 ignition[921]: Ignition finished successfully Jan 30 14:08:47.599854 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:08:47.612748 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:08:47.636413 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 30 14:08:47.724736 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 14:08:47.737595 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:08:47.763348 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:08:47.825691 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 14:08:47.826125 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:08:47.836258 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:08:47.885197 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:08:47.896760 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:08:47.911355 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:08:47.939355 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Jan 30 14:08:47.939384 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:08:47.939395 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:08:47.924574 systemd-networkd[899]: eth0: Gained IPv6LL Jan 30 14:08:47.962305 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:08:47.932937 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:08:47.932975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:08:47.959009 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:08:48.009484 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:08:47.997436 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:08:48.018422 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:08:48.359243 systemd-networkd[899]: enP58133s1: Gained IPv6LL Jan 30 14:08:48.370778 coreos-metadata[942]: Jan 30 14:08:48.370 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 14:08:48.381303 coreos-metadata[942]: Jan 30 14:08:48.374 INFO Fetch successful Jan 30 14:08:48.381303 coreos-metadata[942]: Jan 30 14:08:48.374 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 30 14:08:48.399719 coreos-metadata[942]: Jan 30 14:08:48.387 INFO Fetch successful Jan 30 14:08:48.405125 coreos-metadata[942]: Jan 30 14:08:48.402 INFO wrote hostname ci-4081.3.0-a-554d7cc729 to /sysroot/etc/hostname Jan 30 14:08:48.406049 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:08:48.670874 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:08:48.725345 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:08:48.734552 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:08:48.744320 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:08:49.677198 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:08:49.696631 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:08:49.709359 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:08:49.727273 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 30 14:08:49.740601 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:08:49.767149 ignition[1057]: INFO : Ignition 2.19.0 Jan 30 14:08:49.767149 ignition[1057]: INFO : Stage: mount Jan 30 14:08:49.767149 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:08:49.767149 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:08:49.802544 ignition[1057]: INFO : mount: mount passed Jan 30 14:08:49.802544 ignition[1057]: INFO : Ignition finished successfully Jan 30 14:08:49.768313 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:08:49.777890 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:08:49.807368 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:08:49.830368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:08:49.859659 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1070) Jan 30 14:08:49.859680 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:08:49.874125 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:08:49.879023 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:08:49.887113 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:08:49.889612 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:08:49.921782 ignition[1087]: INFO : Ignition 2.19.0 Jan 30 14:08:49.921782 ignition[1087]: INFO : Stage: files Jan 30 14:08:49.929993 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:08:49.929993 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:08:49.929993 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:08:49.957929 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:08:49.957929 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:08:50.016387 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:08:50.023954 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:08:50.023954 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:08:50.016802 unknown[1087]: wrote ssh authorized keys file for user: core Jan 30 14:08:50.048101 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:08:50.060372 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 14:08:50.092754 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:08:50.193004 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:08:50.193004 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:08:50.215213 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 14:08:50.648903 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:08:50.999821 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:08:51.013469 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:08:51.029167 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:08:51.094027 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:08:51.094027 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:08:51.094027 ignition[1087]: INFO : files: files passed Jan 30 14:08:51.094027 ignition[1087]: INFO : Ignition finished successfully Jan 30 14:08:51.044546 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:08:51.094450 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:08:51.114414 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 30 14:08:51.145506 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:08:51.202375 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:08:51.202375 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:08:51.145620 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:08:51.239798 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:08:51.171287 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:08:51.179933 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:08:51.214312 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:08:51.275513 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:08:51.277145 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:08:51.290599 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:08:51.304968 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:08:51.317403 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:08:51.337430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:08:51.364175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:08:51.384478 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:08:51.405876 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:08:51.421169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 30 14:08:51.429492 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:08:51.441926 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:08:51.442010 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:08:51.460565 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:08:51.473917 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:08:51.486942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:08:51.498946 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:08:51.512603 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:08:51.527600 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:08:51.540777 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:08:51.556673 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:08:51.570580 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:08:51.582762 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:08:51.593705 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:08:51.593791 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:08:51.613733 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:08:51.621172 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:08:51.638438 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:08:51.652978 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:08:51.661126 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:08:51.661217 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 30 14:08:51.683782 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:08:51.683850 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:08:51.699264 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:08:51.699326 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:08:51.713186 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:08:51.713252 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:08:51.793066 ignition[1139]: INFO : Ignition 2.19.0 Jan 30 14:08:51.793066 ignition[1139]: INFO : Stage: umount Jan 30 14:08:51.793066 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:08:51.793066 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 14:08:51.793066 ignition[1139]: INFO : umount: umount passed Jan 30 14:08:51.793066 ignition[1139]: INFO : Ignition finished successfully Jan 30 14:08:51.749338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:08:51.763355 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:08:51.763519 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:08:51.802282 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:08:51.812342 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:08:51.812429 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:08:51.829811 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:08:51.829887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:08:51.846395 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:08:51.846967 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 30 14:08:51.847116 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:08:51.865644 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:08:51.865803 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:08:51.875201 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:08:51.875353 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:08:51.887371 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:08:51.888200 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:08:51.899857 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:08:51.899915 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:08:51.914904 systemd[1]: Stopped target network.target - Network. Jan 30 14:08:51.927038 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:08:51.927138 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:08:51.944486 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:08:51.957351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:08:51.964265 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:08:51.973991 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:08:51.987145 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:08:51.998204 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:08:51.998262 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:08:52.011983 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:08:52.012049 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:08:52.026510 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 30 14:08:52.026588 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:08:52.032842 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:08:52.032890 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:08:52.045541 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:08:52.063943 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:08:52.089153 systemd-networkd[899]: eth0: DHCPv6 lease lost Jan 30 14:08:52.095542 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:08:52.095783 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:08:52.381103 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: Data path switched from VF: enP58133s1 Jan 30 14:08:52.105916 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:08:52.106021 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:08:52.119250 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:08:52.119335 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:08:52.162303 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:08:52.170786 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:08:52.170871 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:08:52.180459 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:08:52.180527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:08:52.196692 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:08:52.196775 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:08:52.216142 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 30 14:08:52.216254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:08:52.230848 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:08:52.277784 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:08:52.279331 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:08:52.296849 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:08:52.296956 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:08:52.310564 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:08:52.310624 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:08:52.325573 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:08:52.325725 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:08:52.360724 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:08:52.360814 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 14:08:52.381166 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:08:52.381235 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:08:52.415392 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:08:52.437738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:08:52.437867 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:08:52.453892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:08:52.453959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:08:52.472720 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 30 14:08:52.472855 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:08:52.485672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:08:52.700247 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 30 14:08:52.485763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:08:52.568741 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:08:52.568898 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:08:52.577124 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:08:52.588915 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:08:52.588991 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:08:52.612428 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:08:52.632058 systemd[1]: Switching root. Jan 30 14:08:52.714154 systemd-journald[217]: Journal stopped Jan 30 14:08:56.678599 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:08:56.678624 kernel: SELinux: policy capability open_perms=1 Jan 30 14:08:56.678635 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:08:56.678643 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:08:56.678653 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:08:56.678660 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:08:56.678669 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:08:56.678677 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:08:56.678685 kernel: audit: type=1403 audit(1738246133.631:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:08:56.678695 systemd[1]: Successfully loaded SELinux policy in 110.836ms. Jan 30 14:08:56.678706 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.423ms. 
Jan 30 14:08:56.678716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:08:56.678728 systemd[1]: Detected virtualization microsoft. Jan 30 14:08:56.678736 systemd[1]: Detected architecture arm64. Jan 30 14:08:56.678745 systemd[1]: Detected first boot. Jan 30 14:08:56.678756 systemd[1]: Hostname set to . Jan 30 14:08:56.678765 systemd[1]: Initializing machine ID from random generator. Jan 30 14:08:56.678774 zram_generator::config[1181]: No configuration found. Jan 30 14:08:56.678784 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:08:56.678793 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 14:08:56.678802 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 14:08:56.678811 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 14:08:56.678822 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:08:56.678831 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:08:56.678841 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:08:56.678850 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:08:56.678860 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:08:56.678869 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:08:56.678879 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:08:56.678889 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 30 14:08:56.678898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:08:56.678908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:08:56.678917 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:08:56.678928 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:08:56.678937 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:08:56.678947 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:08:56.678956 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 14:08:56.678966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:08:56.678976 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 14:08:56.678985 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 14:08:56.678997 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 14:08:56.679007 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:08:56.679016 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:08:56.679026 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:08:56.679035 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:08:56.679046 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:08:56.679056 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:08:56.679065 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:08:56.679075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 30 14:08:56.679084 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:08:56.679110 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:08:56.679123 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 14:08:56.679133 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 14:08:56.679142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:08:56.679154 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 14:08:56.679163 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:08:56.679173 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:08:56.679183 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 14:08:56.679195 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:08:56.679205 systemd[1]: Reached target machines.target - Containers. Jan 30 14:08:56.679215 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:08:56.679224 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:08:56.679234 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:08:56.679244 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:08:56.679253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:08:56.679263 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:08:56.679274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 30 14:08:56.679284 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:08:56.679293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:08:56.679304 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:08:56.679314 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 14:08:56.679324 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 14:08:56.679333 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 14:08:56.679343 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 14:08:56.679353 kernel: fuse: init (API version 7.39) Jan 30 14:08:56.679363 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:08:56.679373 kernel: ACPI: bus type drm_connector registered Jan 30 14:08:56.679381 kernel: loop: module loaded Jan 30 14:08:56.679406 systemd-journald[1284]: Collecting audit messages is disabled. Jan 30 14:08:56.679430 systemd-journald[1284]: Journal started Jan 30 14:08:56.679451 systemd-journald[1284]: Runtime Journal (/run/log/journal/ad926782c5ff48e2b65f4f37dad0e51f) is 8.0M, max 78.5M, 70.5M free. Jan 30 14:08:55.592628 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:08:55.725595 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 14:08:55.726004 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 14:08:55.726420 systemd[1]: systemd-journald.service: Consumed 3.649s CPU time. Jan 30 14:08:56.699036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:08:56.718290 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:08:56.736553 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 30 14:08:56.759138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:08:56.775918 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 14:08:56.776046 systemd[1]: Stopped verity-setup.service. Jan 30 14:08:56.797156 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:08:56.797064 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:08:56.803573 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:08:56.812901 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:08:56.822719 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:08:56.830330 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:08:56.839546 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:08:56.846942 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:08:56.855032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:08:56.863004 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:08:56.863209 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:08:56.872738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:08:56.872893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:08:56.880531 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:08:56.880706 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:08:56.889881 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:08:56.890027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:08:56.897520 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 30 14:08:56.897682 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:08:56.904165 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:08:56.904325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:08:56.910837 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:08:56.918876 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:08:56.928936 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:08:56.938000 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:08:56.958707 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:08:56.973284 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:08:56.984303 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:08:56.990567 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:08:56.990614 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:08:56.998131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:08:57.006536 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:08:57.016018 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:08:57.022416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:08:57.047507 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:08:57.055595 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 30 14:08:57.063417 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:08:57.067853 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:08:57.082275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:08:57.084251 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:08:57.095459 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:08:57.118301 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:08:57.135350 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:08:57.158718 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:08:57.159208 systemd-journald[1284]: Time spent on flushing to /var/log/journal/ad926782c5ff48e2b65f4f37dad0e51f is 94.299ms for 899 entries. Jan 30 14:08:57.159208 systemd-journald[1284]: System Journal (/var/log/journal/ad926782c5ff48e2b65f4f37dad0e51f) is 11.8M, max 2.6G, 2.6G free. Jan 30 14:08:57.340334 kernel: loop0: detected capacity change from 0 to 194096 Jan 30 14:08:57.340389 systemd-journald[1284]: Received client request to flush runtime journal. Jan 30 14:08:57.340431 systemd-journald[1284]: /var/log/journal/ad926782c5ff48e2b65f4f37dad0e51f/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 30 14:08:57.340503 systemd-journald[1284]: Rotating system journal. Jan 30 14:08:57.340528 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:08:57.340545 kernel: loop1: detected capacity change from 0 to 114432 Jan 30 14:08:57.176798 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 30 14:08:57.193633 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:08:57.231478 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:08:57.256808 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:08:57.279575 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:08:57.293505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:08:57.303429 udevadm[1318]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 14:08:57.342368 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:08:57.355519 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:08:57.356259 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:08:57.474141 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:08:57.500441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:08:57.535718 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Jan 30 14:08:57.535739 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Jan 30 14:08:57.540333 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:08:57.724129 kernel: loop2: detected capacity change from 0 to 31320 Jan 30 14:08:58.074269 kernel: loop3: detected capacity change from 0 to 114328 Jan 30 14:08:58.416432 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 30 14:08:58.426432 kernel: loop4: detected capacity change from 0 to 194096 Jan 30 14:08:58.437138 kernel: loop5: detected capacity change from 0 to 114432 Jan 30 14:08:58.438490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:08:58.455130 kernel: loop6: detected capacity change from 0 to 31320 Jan 30 14:08:58.463596 systemd-udevd[1346]: Using default interface naming scheme 'v255'. Jan 30 14:08:58.465160 kernel: loop7: detected capacity change from 0 to 114328 Jan 30 14:08:58.469284 (sd-merge)[1344]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 30 14:08:58.469797 (sd-merge)[1344]: Merged extensions into '/usr'. Jan 30 14:08:58.478796 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:08:58.478814 systemd[1]: Reloading... Jan 30 14:08:58.561218 zram_generator::config[1368]: No configuration found. Jan 30 14:08:58.705751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:08:58.765393 systemd[1]: Reloading finished in 286 ms. Jan 30 14:08:58.795762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:08:58.810601 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:08:58.841861 systemd[1]: Starting ensure-sysext.service... Jan 30 14:08:58.854172 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:08:58.870652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:08:58.881700 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 14:08:58.915618 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 30 14:08:58.931198 systemd[1]: Reloading requested from client PID 1448 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:08:58.931223 systemd[1]: Reloading... Jan 30 14:08:58.969779 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:08:58.970114 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:08:58.970887 systemd-tmpfiles[1450]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:08:58.973500 systemd-tmpfiles[1450]: ACLs are not supported, ignoring. Jan 30 14:08:58.973717 systemd-tmpfiles[1450]: ACLs are not supported, ignoring. Jan 30 14:08:58.983848 systemd-tmpfiles[1450]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:08:58.984288 systemd-tmpfiles[1450]: Skipping /boot Jan 30 14:08:59.001289 systemd-tmpfiles[1450]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:08:59.001304 systemd-tmpfiles[1450]: Skipping /boot Jan 30 14:08:59.073135 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:08:59.115514 zram_generator::config[1499]: No configuration found. 
Jan 30 14:08:59.186322 kernel: hv_vmbus: registering driver hv_balloon Jan 30 14:08:59.186513 kernel: hv_vmbus: registering driver hyperv_fb Jan 30 14:08:59.186574 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 30 14:08:59.206133 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 30 14:08:59.206262 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 30 14:08:59.206287 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 30 14:08:59.215131 kernel: Console: switching to colour dummy device 80x25 Jan 30 14:08:59.230550 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 14:08:59.293949 systemd-networkd[1449]: lo: Link UP Jan 30 14:08:59.295223 systemd-networkd[1449]: lo: Gained carrier Jan 30 14:08:59.299616 systemd-networkd[1449]: Enumeration completed Jan 30 14:08:59.301443 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:08:59.301666 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:08:59.350594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 14:08:59.358178 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1435) Jan 30 14:08:59.386184 kernel: mlx5_core e315:00:02.0 enP58133s1: Link up Jan 30 14:08:59.426132 kernel: hv_netvsc 002248bf-e115-0022-48bf-e115002248bf eth0: Data path switched to VF: enP58133s1 Jan 30 14:08:59.426980 systemd-networkd[1449]: enP58133s1: Link UP Jan 30 14:08:59.427083 systemd-networkd[1449]: eth0: Link UP Jan 30 14:08:59.427086 systemd-networkd[1449]: eth0: Gained carrier Jan 30 14:08:59.427661 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:08:59.433513 systemd-networkd[1449]: enP58133s1: Gained carrier Jan 30 14:08:59.444176 systemd-networkd[1449]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 30 14:08:59.452079 systemd[1]: Reloading finished in 520 ms. Jan 30 14:08:59.466823 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:08:59.475185 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:08:59.489668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:08:59.520136 systemd[1]: Finished ensure-sysext.service. Jan 30 14:08:59.545186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 14:08:59.561549 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:08:59.576421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:08:59.583754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:08:59.587839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:08:59.599715 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 30 14:08:59.616872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:08:59.637741 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:08:59.648338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:08:59.651620 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 14:08:59.664495 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 14:08:59.685482 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 14:08:59.709423 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:08:59.721263 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 14:08:59.743511 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 14:08:59.760307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:08:59.777868 augenrules[1623]: No rules
Jan 30 14:08:59.779196 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 14:08:59.792797 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 14:08:59.800286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:08:59.800440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:08:59.811000 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:08:59.811399 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:08:59.824690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:08:59.824917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:08:59.835708 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:08:59.835876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:08:59.845323 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 14:08:59.855337 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 14:08:59.880236 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 14:08:59.903539 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 14:08:59.915341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:08:59.915485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:08:59.984965 systemd-resolved[1618]: Positive Trust Anchors:
Jan 30 14:08:59.984992 systemd-resolved[1618]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:08:59.985024 systemd-resolved[1618]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:09:00.001448 lvm[1641]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:09:00.007301 systemd-resolved[1618]: Using system hostname 'ci-4081.3.0-a-554d7cc729'.
Jan 30 14:09:00.012953 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:09:00.022873 systemd[1]: Reached target network.target - Network.
Jan 30 14:09:00.031413 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:09:00.046173 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 14:09:00.053633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:09:00.068486 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 14:09:00.082566 lvm[1644]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:09:00.091462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:09:00.111201 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 14:09:00.290626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 14:09:00.302241 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 14:09:00.582322 systemd-networkd[1449]: enP58133s1: Gained IPv6LL
Jan 30 14:09:00.838319 systemd-networkd[1449]: eth0: Gained IPv6LL
Jan 30 14:09:00.842157 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 14:09:00.851234 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 14:09:04.736446 ldconfig[1310]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 14:09:04.770765 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 14:09:04.784345 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 14:09:04.802859 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 14:09:04.809903 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:09:04.818487 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 14:09:04.827064 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 14:09:04.835040 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 14:09:04.841620 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 14:09:04.850037 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 14:09:04.858157 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 14:09:04.858217 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:09:04.865578 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:09:05.080290 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 14:09:05.090993 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 14:09:05.101603 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 14:09:05.109731 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 14:09:05.117298 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:09:05.122535 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:09:05.127983 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 14:09:05.128017 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 14:09:05.135207 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 30 14:09:05.144326 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 14:09:05.160064 (chronyd)[1655]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 30 14:09:05.161500 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 14:09:05.188210 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 14:09:05.196292 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 14:09:05.217735 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 14:09:05.223702 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 14:09:05.223744 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 30 14:09:05.225697 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 30 14:09:05.231512 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 30 14:09:05.235303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:09:05.238051 KVP[1663]: KVP starting; pid is:1663
Jan 30 14:09:05.245411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 14:09:05.259426 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 14:09:05.269156 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 14:09:05.278237 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 14:09:05.286542 KVP[1663]: KVP LIC Version: 3.1
Jan 30 14:09:05.290157 kernel: hv_utils: KVP IC version 4.0
Jan 30 14:09:05.297329 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 14:09:05.306189 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 14:09:05.312287 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 14:09:05.312851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 14:09:05.315418 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 14:09:05.327412 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 14:09:05.339317 chronyd[1676]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 30 14:09:05.341786 chronyd[1676]: Timezone right/UTC failed leap second check, ignoring
Jan 30 14:09:05.342025 chronyd[1676]: Loaded seccomp filter (level 2)
Jan 30 14:09:05.345641 systemd[1]: Started chronyd.service - NTP client/server.
Jan 30 14:09:05.360767 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 14:09:05.361661 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 14:09:05.387149 jq[1661]: false
Jan 30 14:09:05.389501 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 14:09:05.389771 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 14:09:05.392282 jq[1674]: true
Jan 30 14:09:05.412838 jq[1689]: true
Jan 30 14:09:05.448086 extend-filesystems[1662]: Found loop4
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found loop5
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found loop6
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found loop7
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found sda
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found sda1
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found sda2
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found sda3
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found usr
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found sda4
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found sda6
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found sda7
Jan 30 14:09:05.454989 extend-filesystems[1662]: Found sda9
Jan 30 14:09:05.454989 extend-filesystems[1662]: Checking size of /dev/sda9
Jan 30 14:09:05.454356 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 14:09:05.599901 tar[1679]: linux-arm64/helm
Jan 30 14:09:05.600336 update_engine[1673]: I20250130 14:09:05.506330 1673 main.cc:92] Flatcar Update Engine starting
Jan 30 14:09:05.454559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 14:09:05.461014 (ntainerd)[1698]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 14:09:05.496362 systemd-logind[1672]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 14:09:05.503592 systemd-logind[1672]: New seat seat0.
Jan 30 14:09:05.507878 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 14:09:05.857398 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 14:09:05.898533 extend-filesystems[1662]: Old size kept for /dev/sda9
Jan 30 14:09:05.898533 extend-filesystems[1662]: Found sr0
Jan 30 14:09:05.904713 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 14:09:05.904914 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 14:09:06.052218 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1720)
Jan 30 14:09:06.132018 tar[1679]: linux-arm64/LICENSE
Jan 30 14:09:06.132018 tar[1679]: linux-arm64/README.md
Jan 30 14:09:06.140755 dbus-daemon[1658]: [system] SELinux support is enabled
Jan 30 14:09:06.142493 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 14:09:06.156010 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 14:09:06.157300 update_engine[1673]: I20250130 14:09:06.156730 1673 update_check_scheduler.cc:74] Next update check in 6m47s
Jan 30 14:09:06.166693 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 14:09:06.169111 dbus-daemon[1658]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 30 14:09:06.166756 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 14:09:06.180080 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 14:09:06.180130 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 14:09:06.195373 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 14:09:06.211597 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 14:09:06.294204 coreos-metadata[1657]: Jan 30 14:09:06.294 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 14:09:06.446853 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 14:09:06.865809 coreos-metadata[1657]: Jan 30 14:09:06.295 INFO Fetch successful
Jan 30 14:09:06.865809 coreos-metadata[1657]: Jan 30 14:09:06.295 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 30 14:09:06.865809 coreos-metadata[1657]: Jan 30 14:09:06.300 INFO Fetch successful
Jan 30 14:09:06.865809 coreos-metadata[1657]: Jan 30 14:09:06.300 INFO Fetching http://168.63.129.16/machine/5bf23d56-ebcc-4e79-8177-c7f024a87cc6/58c458d7%2D6aff%2D4a69%2D8d7c%2D0b7075e615fa.%5Fci%2D4081.3.0%2Da%2D554d7cc729?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 30 14:09:06.865809 coreos-metadata[1657]: Jan 30 14:09:06.302 INFO Fetch successful
Jan 30 14:09:06.865809 coreos-metadata[1657]: Jan 30 14:09:06.302 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 30 14:09:06.865809 coreos-metadata[1657]: Jan 30 14:09:06.417 INFO Fetch successful
Jan 30 14:09:06.453444 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 14:09:07.041061 locksmithd[1766]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 14:09:07.262450 sshd_keygen[1688]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 14:09:07.269754 bash[1718]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 14:09:07.270841 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 14:09:07.284702 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 30 14:09:07.297110 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 14:09:07.310760 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 14:09:07.329329 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 30 14:09:07.339062 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 14:09:07.339873 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 14:09:07.362641 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 14:09:07.381533 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 30 14:09:07.476419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:09:07.485437 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:09:07.533753 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 14:09:07.551537 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 14:09:07.559139 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 30 14:09:07.570025 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 14:09:07.976702 kubelet[1808]: E0130 14:09:07.976621 1808 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:09:07.980158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:09:07.980367 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:09:08.611150 containerd[1698]: time="2025-01-30T14:09:08.610827600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 14:09:08.639854 containerd[1698]: time="2025-01-30T14:09:08.639749200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642013 containerd[1698]: time="2025-01-30T14:09:08.641914680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642013 containerd[1698]: time="2025-01-30T14:09:08.641985520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 14:09:08.642013 containerd[1698]: time="2025-01-30T14:09:08.642012360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 14:09:08.642339 containerd[1698]: time="2025-01-30T14:09:08.642294360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 14:09:08.642339 containerd[1698]: time="2025-01-30T14:09:08.642337760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642462 containerd[1698]: time="2025-01-30T14:09:08.642428320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642462 containerd[1698]: time="2025-01-30T14:09:08.642449640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642777 containerd[1698]: time="2025-01-30T14:09:08.642689160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642777 containerd[1698]: time="2025-01-30T14:09:08.642716080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642777 containerd[1698]: time="2025-01-30T14:09:08.642740840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642777 containerd[1698]: time="2025-01-30T14:09:08.642753240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 14:09:08.642921 containerd[1698]: time="2025-01-30T14:09:08.642877160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:09:08.643257 containerd[1698]: time="2025-01-30T14:09:08.643212160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:09:08.643426 containerd[1698]: time="2025-01-30T14:09:08.643393360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:09:08.643426 containerd[1698]: time="2025-01-30T14:09:08.643422360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 14:09:08.643576 containerd[1698]: time="2025-01-30T14:09:08.643544760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 14:09:08.643632 containerd[1698]: time="2025-01-30T14:09:08.643615680Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 14:09:08.970413 containerd[1698]: time="2025-01-30T14:09:08.970290840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 14:09:08.970413 containerd[1698]: time="2025-01-30T14:09:08.970366000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 14:09:08.970413 containerd[1698]: time="2025-01-30T14:09:08.970387680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 14:09:08.970413 containerd[1698]: time="2025-01-30T14:09:08.970405920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 14:09:08.970589 containerd[1698]: time="2025-01-30T14:09:08.970423880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 14:09:08.970693 containerd[1698]: time="2025-01-30T14:09:08.970609680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.970944200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971174880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971227240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971242320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971258240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971271760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971285240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971301720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971318480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971332400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971346200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971397400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971420160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.971842 containerd[1698]: time="2025-01-30T14:09:08.971436680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971449600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971464080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971478560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971495640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971508520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971529040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971550880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971594160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971610040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971623040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971636200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971658680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971681040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971695160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972268 containerd[1698]: time="2025-01-30T14:09:08.971707160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 14:09:08.972626 containerd[1698]: time="2025-01-30T14:09:08.971811840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 14:09:08.972626 containerd[1698]: time="2025-01-30T14:09:08.971833520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 14:09:08.972626 containerd[1698]: time="2025-01-30T14:09:08.971845520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 14:09:08.972626 containerd[1698]: time="2025-01-30T14:09:08.971857760Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 14:09:08.972626 containerd[1698]: time="2025-01-30T14:09:08.971868000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972626 containerd[1698]: time="2025-01-30T14:09:08.971883040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 14:09:08.972626 containerd[1698]: time="2025-01-30T14:09:08.971893160Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 14:09:08.972626 containerd[1698]: time="2025-01-30T14:09:08.971905320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 14:09:08.972787 containerd[1698]: time="2025-01-30T14:09:08.972316920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 14:09:08.972787 containerd[1698]: time="2025-01-30T14:09:08.972426760Z" level=info msg="Connect containerd service"
Jan 30 14:09:08.972787 containerd[1698]: time="2025-01-30T14:09:08.972489480Z" level=info msg="using legacy CRI server"
Jan 30 14:09:08.972787 containerd[1698]: time="2025-01-30T14:09:08.972496560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 14:09:08.972787 containerd[1698]: time="2025-01-30T14:09:08.972597160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 14:09:08.973502 containerd[1698]: time="2025-01-30T14:09:08.973444960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 14:09:08.973965 containerd[1698]: time="2025-01-30T14:09:08.973710560Z" level=info msg="Start subscribing containerd event"
Jan 30 14:09:08.973965 containerd[1698]: time="2025-01-30T14:09:08.973774920Z" level=info msg="Start recovering state"
Jan 30 14:09:08.973965 containerd[1698]: time="2025-01-30T14:09:08.973908000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 14:09:08.973965 containerd[1698]: time="2025-01-30T14:09:08.973961080Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 14:09:08.974741 containerd[1698]: time="2025-01-30T14:09:08.974266160Z" level=info msg="Start event monitor"
Jan 30 14:09:08.974741 containerd[1698]: time="2025-01-30T14:09:08.974301480Z" level=info msg="Start snapshots syncer"
Jan 30 14:09:08.974741 containerd[1698]: time="2025-01-30T14:09:08.974320600Z" level=info msg="Start cni network conf syncer for default"
Jan 30 14:09:08.974741 containerd[1698]: time="2025-01-30T14:09:08.974341520Z" level=info msg="Start streaming server"
Jan 30 14:09:08.974741 containerd[1698]: time="2025-01-30T14:09:08.974488520Z" level=info msg="containerd successfully booted in 0.364575s"
Jan 30 14:09:08.974699 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 14:09:08.987726 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 14:09:08.996553 systemd[1]: Startup finished in 740ms (kernel) + 12.199s (initrd) + 15.473s (userspace) = 28.413s.
Jan 30 14:09:11.231167 login[1811]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Jan 30 14:09:11.232119 login[1810]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:09:11.241241 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 14:09:11.251410 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 14:09:11.255953 systemd-logind[1672]: New session 2 of user core.
Jan 30 14:09:11.265539 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 14:09:11.273470 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 14:09:11.421728 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 14:09:11.540603 systemd[1829]: Queued start job for default target default.target.
Jan 30 14:09:11.556120 systemd[1829]: Created slice app.slice - User Application Slice.
Jan 30 14:09:11.556156 systemd[1829]: Reached target paths.target - Paths.
Jan 30 14:09:11.556169 systemd[1829]: Reached target timers.target - Timers.
Jan 30 14:09:11.557566 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 14:09:11.570524 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 14:09:11.570657 systemd[1829]: Reached target sockets.target - Sockets.
Jan 30 14:09:11.570671 systemd[1829]: Reached target basic.target - Basic System.
Jan 30 14:09:11.570719 systemd[1829]: Reached target default.target - Main User Target.
Jan 30 14:09:11.570750 systemd[1829]: Startup finished in 141ms.
Jan 30 14:09:11.570847 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 14:09:11.579295 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 14:09:12.232824 login[1811]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:09:12.237726 systemd-logind[1672]: New session 1 of user core.
Jan 30 14:09:12.244334 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 14:09:14.853592 waagent[1802]: 2025-01-30T14:09:14.853466Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 30 14:09:14.860374 waagent[1802]: 2025-01-30T14:09:14.860271Z INFO Daemon Daemon OS: flatcar 4081.3.0
Jan 30 14:09:14.866196 waagent[1802]: 2025-01-30T14:09:14.866078Z INFO Daemon Daemon Python: 3.11.9
Jan 30 14:09:14.871578 waagent[1802]: 2025-01-30T14:09:14.871492Z INFO Daemon Daemon Run daemon
Jan 30 14:09:14.876087 waagent[1802]: 2025-01-30T14:09:14.876005Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0'
Jan 30 14:09:14.886140 waagent[1802]: 2025-01-30T14:09:14.886041Z INFO Daemon Daemon Using waagent for provisioning
Jan 30 14:09:14.892332 waagent[1802]: 2025-01-30T14:09:14.892271Z INFO Daemon Daemon Activate resource disk
Jan 30 14:09:14.898011 waagent[1802]: 2025-01-30T14:09:14.897934Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 30 14:09:14.910404 waagent[1802]: 2025-01-30T14:09:14.910312Z INFO Daemon Daemon Found device: None
Jan 30 14:09:14.915782 waagent[1802]: 2025-01-30T14:09:14.915709Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 30 14:09:14.924566 waagent[1802]: 2025-01-30T14:09:14.924491Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 30 14:09:14.940433 waagent[1802]: 2025-01-30T14:09:14.940345Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 30 14:09:14.948067 waagent[1802]: 2025-01-30T14:09:14.947986Z INFO Daemon Daemon Running default provisioning handler
Jan 30 14:09:14.963364 waagent[1802]: 2025-01-30T14:09:14.963265Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 30 14:09:14.978686 waagent[1802]: 2025-01-30T14:09:14.978593Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 30 14:09:14.988596 waagent[1802]: 2025-01-30T14:09:14.988510Z INFO Daemon Daemon cloud-init is enabled: False
Jan 30 14:09:14.994464 waagent[1802]: 2025-01-30T14:09:14.994379Z INFO Daemon Daemon Copying ovf-env.xml
Jan 30 14:09:15.590022 waagent[1802]: 2025-01-30T14:09:15.586477Z INFO Daemon Daemon Successfully mounted dvd
Jan 30 14:09:15.604876 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 30 14:09:15.605340 waagent[1802]: 2025-01-30T14:09:15.604988Z INFO Daemon Daemon Detect protocol endpoint
Jan 30 14:09:15.610720 waagent[1802]: 2025-01-30T14:09:15.610619Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 30 14:09:15.617482 waagent[1802]: 2025-01-30T14:09:15.617398Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 30 14:09:15.624926 waagent[1802]: 2025-01-30T14:09:15.624849Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 30 14:09:15.630693 waagent[1802]: 2025-01-30T14:09:15.630617Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 30 14:09:15.637224 waagent[1802]: 2025-01-30T14:09:15.637135Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 30 14:09:15.735595 waagent[1802]: 2025-01-30T14:09:15.735529Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 30 14:09:15.742586 waagent[1802]: 2025-01-30T14:09:15.742542Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 30 14:09:15.748260 waagent[1802]: 2025-01-30T14:09:15.748181Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 30 14:09:16.229130 waagent[1802]: 2025-01-30T14:09:16.228148Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 30 14:09:16.235067 waagent[1802]: 2025-01-30T14:09:16.234979Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 30 14:09:16.244903 waagent[1802]: 2025-01-30T14:09:16.244828Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 30 14:09:16.312312 waagent[1802]: 2025-01-30T14:09:16.312262Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Jan 30 14:09:16.318230 waagent[1802]: 2025-01-30T14:09:16.318176Z INFO Daemon
Jan 30 14:09:16.321168 waagent[1802]: 2025-01-30T14:09:16.321086Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6fe2e927-9213-44b9-af53-d51724ef7410 eTag: 11906003376009945539 source: Fabric]
Jan 30 14:09:16.332709 waagent[1802]: 2025-01-30T14:09:16.332649Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 30 14:09:16.339830 waagent[1802]: 2025-01-30T14:09:16.339776Z INFO Daemon
Jan 30 14:09:16.342706 waagent[1802]: 2025-01-30T14:09:16.342636Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 30 14:09:16.353811 waagent[1802]: 2025-01-30T14:09:16.353760Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 30 14:09:16.450230 waagent[1802]: 2025-01-30T14:09:16.450116Z INFO Daemon Downloaded certificate {'thumbprint': '549379DA41B0F24F8648696552B4365DDC59C627', 'hasPrivateKey': False}
Jan 30 14:09:16.461316 waagent[1802]: 2025-01-30T14:09:16.461254Z INFO Daemon Downloaded certificate {'thumbprint': '1F0B6F4D82DF602F90E0DF6BCBEB4AED5E51A7B0', 'hasPrivateKey': True}
Jan 30 14:09:16.472579 waagent[1802]: 2025-01-30T14:09:16.472515Z INFO Daemon Fetch goal state completed
Jan 30 14:09:16.487173 waagent[1802]: 2025-01-30T14:09:16.487038Z INFO Daemon Daemon Starting provisioning
Jan 30 14:09:16.493232 waagent[1802]: 2025-01-30T14:09:16.493138Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 30 14:09:16.498829 waagent[1802]: 2025-01-30T14:09:16.498750Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-554d7cc729]
Jan 30 14:09:16.771513 waagent[1802]: 2025-01-30T14:09:16.771417Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-554d7cc729]
Jan 30 14:09:16.778001 waagent[1802]: 2025-01-30T14:09:16.777917Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 30 14:09:16.784224 waagent[1802]: 2025-01-30T14:09:16.784157Z INFO Daemon Daemon Primary interface is [eth0]
Jan 30 14:09:16.981707 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:09:16.981716 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:09:16.981750 systemd-networkd[1449]: eth0: DHCP lease lost
Jan 30 14:09:16.983649 waagent[1802]: 2025-01-30T14:09:16.983338Z INFO Daemon Daemon Create user account if not exists
Jan 30 14:09:16.989290 waagent[1802]: 2025-01-30T14:09:16.989206Z INFO Daemon Daemon User core already exists, skip useradd
Jan 30 14:09:16.989427 systemd-networkd[1449]: eth0: DHCPv6 lease lost
Jan 30 14:09:16.995307 waagent[1802]: 2025-01-30T14:09:16.995221Z INFO Daemon Daemon Configure sudoer
Jan 30 14:09:17.000163 waagent[1802]: 2025-01-30T14:09:17.000048Z INFO Daemon Daemon Configure sshd
Jan 30 14:09:17.005006 waagent[1802]: 2025-01-30T14:09:17.004922Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 30 14:09:17.018955 waagent[1802]: 2025-01-30T14:09:17.018827Z INFO Daemon Daemon Deploy ssh public key.
Jan 30 14:09:17.045219 systemd-networkd[1449]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 30 14:09:18.107222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:09:18.117345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:09:18.310150 waagent[1802]: 2025-01-30T14:09:18.306646Z INFO Daemon Daemon Provisioning complete
Jan 30 14:09:18.334808 waagent[1802]: 2025-01-30T14:09:18.334728Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 30 14:09:18.341389 waagent[1802]: 2025-01-30T14:09:18.341309Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 30 14:09:18.351510 waagent[1802]: 2025-01-30T14:09:18.351423Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 30 14:09:18.514817 waagent[1889]: 2025-01-30T14:09:18.514635Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 30 14:09:18.516363 waagent[1889]: 2025-01-30T14:09:18.515462Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0
Jan 30 14:09:18.516363 waagent[1889]: 2025-01-30T14:09:18.515589Z INFO ExtHandler ExtHandler Python: 3.11.9
Jan 30 14:09:20.349090 waagent[1889]: 2025-01-30T14:09:20.348825Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 30 14:09:20.355136 waagent[1889]: 2025-01-30T14:09:20.352818Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 30 14:09:20.355136 waagent[1889]: 2025-01-30T14:09:20.352992Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 30 14:09:20.364161 waagent[1889]: 2025-01-30T14:09:20.364020Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 30 14:09:20.373082 waagent[1889]: 2025-01-30T14:09:20.373010Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Jan 30 14:09:20.373820 waagent[1889]: 2025-01-30T14:09:20.373757Z INFO ExtHandler
Jan 30 14:09:20.373900 waagent[1889]: 2025-01-30T14:09:20.373869Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 96382dc4-f2f0-49cc-8a59-f9f60b443c68 eTag: 11906003376009945539 source: Fabric]
Jan 30 14:09:20.374294 waagent[1889]: 2025-01-30T14:09:20.374251Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 30 14:09:20.399143 waagent[1889]: 2025-01-30T14:09:20.398662Z INFO ExtHandler
Jan 30 14:09:20.399143 waagent[1889]: 2025-01-30T14:09:20.398867Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 30 14:09:20.404058 waagent[1889]: 2025-01-30T14:09:20.404011Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 30 14:09:20.569157 waagent[1889]: 2025-01-30T14:09:20.567527Z INFO ExtHandler Downloaded certificate {'thumbprint': '549379DA41B0F24F8648696552B4365DDC59C627', 'hasPrivateKey': False}
Jan 30 14:09:20.569157 waagent[1889]: 2025-01-30T14:09:20.568224Z INFO ExtHandler Downloaded certificate {'thumbprint': '1F0B6F4D82DF602F90E0DF6BCBEB4AED5E51A7B0', 'hasPrivateKey': True}
Jan 30 14:09:20.569157 waagent[1889]: 2025-01-30T14:09:20.568703Z INFO ExtHandler Fetch goal state completed
Jan 30 14:09:20.587672 waagent[1889]: 2025-01-30T14:09:20.587575Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1889
Jan 30 14:09:20.587865 waagent[1889]: 2025-01-30T14:09:20.587822Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 30 14:09:20.589924 waagent[1889]: 2025-01-30T14:09:20.589855Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 30 14:09:20.590395 waagent[1889]: 2025-01-30T14:09:20.590351Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 30 14:09:20.671612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:09:20.681235 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:09:20.683476 waagent[1889]: 2025-01-30T14:09:20.682844Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 30 14:09:20.683476 waagent[1889]: 2025-01-30T14:09:20.683157Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 30 14:09:20.692688 waagent[1889]: 2025-01-30T14:09:20.692637Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 30 14:09:20.701621 systemd[1]: Reloading requested from client PID 1917 ('systemctl') (unit waagent.service)...
Jan 30 14:09:20.701638 systemd[1]: Reloading...
Jan 30 14:09:20.794486 kubelet[1910]: E0130 14:09:20.794417 1910 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:09:20.818135 zram_generator::config[1955]: No configuration found.
Jan 30 14:09:20.955999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:09:21.033058 systemd[1]: Reloading finished in 331 ms.
Jan 30 14:09:21.057163 waagent[1889]: 2025-01-30T14:09:21.056925Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 30 14:09:21.057979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:09:21.058147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:09:21.067016 systemd[1]: Reloading requested from client PID 2009 ('systemctl') (unit waagent.service)...
Jan 30 14:09:21.067037 systemd[1]: Reloading...
Jan 30 14:09:21.171156 zram_generator::config[2044]: No configuration found.
Jan 30 14:09:21.282389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:09:21.362156 systemd[1]: Reloading finished in 294 ms.
Jan 30 14:09:21.383986 waagent[1889]: 2025-01-30T14:09:21.382849Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 30 14:09:21.383986 waagent[1889]: 2025-01-30T14:09:21.383091Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 30 14:09:21.729154 waagent[1889]: 2025-01-30T14:09:21.728706Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 30 14:09:21.729582 waagent[1889]: 2025-01-30T14:09:21.729504Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 30 14:09:21.730651 waagent[1889]: 2025-01-30T14:09:21.730528Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 30 14:09:21.731253 waagent[1889]: 2025-01-30T14:09:21.731085Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 30 14:09:21.731687 waagent[1889]: 2025-01-30T14:09:21.731530Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 30 14:09:21.731790 waagent[1889]: 2025-01-30T14:09:21.731673Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 30 14:09:21.732388 waagent[1889]: 2025-01-30T14:09:21.732258Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 30 14:09:21.732504 waagent[1889]: 2025-01-30T14:09:21.732388Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 30 14:09:21.733172 waagent[1889]: 2025-01-30T14:09:21.733005Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 30 14:09:21.733172 waagent[1889]: 2025-01-30T14:09:21.733129Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 30 14:09:21.734142 waagent[1889]: 2025-01-30T14:09:21.733585Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 30 14:09:21.734142 waagent[1889]: 2025-01-30T14:09:21.733717Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 30 14:09:21.734142 waagent[1889]: 2025-01-30T14:09:21.733996Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 30 14:09:21.734472 waagent[1889]: 2025-01-30T14:09:21.734312Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 30 14:09:21.734472 waagent[1889]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 30 14:09:21.734472 waagent[1889]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 30 14:09:21.734472 waagent[1889]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 30 14:09:21.734472 waagent[1889]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 30 14:09:21.734472 waagent[1889]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 30 14:09:21.734472 waagent[1889]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 30 14:09:21.738920 waagent[1889]: 2025-01-30T14:09:21.738274Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 30 14:09:21.738920 waagent[1889]: 2025-01-30T14:09:21.738491Z INFO EnvHandler ExtHandler Configure routes
Jan 30 14:09:21.738920 waagent[1889]: 2025-01-30T14:09:21.738560Z INFO EnvHandler ExtHandler Gateway:None
Jan 30 14:09:21.738920 waagent[1889]: 2025-01-30T14:09:21.738601Z INFO EnvHandler ExtHandler Routes:None
Jan 30 14:09:21.752244 waagent[1889]: 2025-01-30T14:09:21.752177Z INFO ExtHandler ExtHandler
Jan 30 14:09:21.752851 waagent[1889]: 2025-01-30T14:09:21.752793Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 483bc435-4fe8-4a48-9e6b-cc2edd749ba2 correlation be567a2c-ae3c-4b31-bfe5-9d44a656e848 created: 2025-01-30T14:07:56.579071Z]
Jan 30 14:09:21.753514 waagent[1889]: 2025-01-30T14:09:21.753457Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 30 14:09:21.754483 waagent[1889]: 2025-01-30T14:09:21.754415Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Jan 30 14:09:21.776287 waagent[1889]: 2025-01-30T14:09:21.776177Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 30 14:09:21.776287 waagent[1889]: Executing ['ip', '-a', '-o', 'link']:
Jan 30 14:09:21.776287 waagent[1889]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 30 14:09:21.776287 waagent[1889]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bf:e1:15 brd ff:ff:ff:ff:ff:ff
Jan 30 14:09:21.776287 waagent[1889]: 3: enP58133s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bf:e1:15 brd ff:ff:ff:ff:ff:ff\ altname enP58133p0s2
Jan 30 14:09:21.776287 waagent[1889]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 30 14:09:21.776287 waagent[1889]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 30 14:09:21.776287 waagent[1889]: 2: eth0 inet 10.200.20.33/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 30 14:09:21.776287 waagent[1889]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 30 14:09:21.776287 waagent[1889]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 30 14:09:21.776287 waagent[1889]: 2: eth0 inet6 fe80::222:48ff:febf:e115/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 30 14:09:21.776287 waagent[1889]: 3: enP58133s1 inet6 fe80::222:48ff:febf:e115/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 30 14:09:21.808398 waagent[1889]: 2025-01-30T14:09:21.808304Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 748A8F98-290E-4CAB-A6DA-4F5A5EF0B988;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 30 14:09:21.882205 waagent[1889]: 2025-01-30T14:09:21.881444Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 30 14:09:21.882205 waagent[1889]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 30 14:09:21.882205 waagent[1889]: pkts bytes target prot opt in out source destination
Jan 30 14:09:21.882205 waagent[1889]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 30 14:09:21.882205 waagent[1889]: pkts bytes target prot opt in out source destination
Jan 30 14:09:21.882205 waagent[1889]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 30 14:09:21.882205 waagent[1889]: pkts bytes target prot opt in out source destination
Jan 30 14:09:21.882205 waagent[1889]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 30 14:09:21.882205 waagent[1889]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 30 14:09:21.882205 waagent[1889]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 30 14:09:21.886723 waagent[1889]: 2025-01-30T14:09:21.886046Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 30 14:09:21.886723 waagent[1889]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 30 14:09:21.886723 waagent[1889]: pkts bytes target prot opt in out source destination
Jan 30 14:09:21.886723 waagent[1889]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 30 14:09:21.886723 waagent[1889]: pkts bytes target prot opt in out source destination
Jan 30 14:09:21.886723 waagent[1889]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 30 14:09:21.886723 waagent[1889]: pkts bytes target prot opt in out source destination
Jan 30 14:09:21.886723 waagent[1889]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 30 14:09:21.886723 waagent[1889]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 30 14:09:21.886723 waagent[1889]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 30 14:09:21.886723 waagent[1889]: 2025-01-30T14:09:21.886563Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 30 14:09:29.132354 chronyd[1676]: Selected source PHC0
Jan 30 14:09:31.107426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 14:09:31.114436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:09:31.247495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:09:31.259454 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:09:31.312750 kubelet[2137]: E0130 14:09:31.312669 2137 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:09:31.315769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:09:31.316190 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:09:41.357513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 30 14:09:41.363347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:09:41.626344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:09:41.641435 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:09:41.690246 kubelet[2153]: E0130 14:09:41.690177 2153 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:09:41.692580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:09:41.692717 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:09:47.341770 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 30 14:09:51.618369 update_engine[1673]: I20250130 14:09:51.618238 1673 update_attempter.cc:509] Updating boot flags...
Jan 30 14:09:51.693682 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2173)
Jan 30 14:09:51.708406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 30 14:09:51.717510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:09:51.823159 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2175)
Jan 30 14:09:51.990340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:09:51.991439 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:09:52.042583 kubelet[2235]: E0130 14:09:52.042506 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:09:52.045910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:09:52.046284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:10:02.107415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 30 14:10:02.116372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:10:02.371359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:10:02.382784 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:10:02.430081 kubelet[2250]: E0130 14:10:02.430027 2250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:10:02.433352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:10:02.433510 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:10:03.382024 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 14:10:03.390425 systemd[1]: Started sshd@0-10.200.20.33:22-10.200.16.10:46086.service - OpenSSH per-connection server daemon (10.200.16.10:46086). Jan 30 14:10:03.908776 sshd[2260]: Accepted publickey for core from 10.200.16.10 port 46086 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:10:03.910648 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:03.915705 systemd-logind[1672]: New session 3 of user core. Jan 30 14:10:03.925279 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:10:04.321506 systemd[1]: Started sshd@1-10.200.20.33:22-10.200.16.10:46102.service - OpenSSH per-connection server daemon (10.200.16.10:46102). Jan 30 14:10:04.763576 sshd[2265]: Accepted publickey for core from 10.200.16.10 port 46102 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:10:04.765454 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:04.771706 systemd-logind[1672]: New session 4 of user core. Jan 30 14:10:04.785313 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:10:05.086529 sshd[2265]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:05.091498 systemd[1]: sshd@1-10.200.20.33:22-10.200.16.10:46102.service: Deactivated successfully. Jan 30 14:10:05.093329 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:10:05.094061 systemd-logind[1672]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:10:05.095837 systemd-logind[1672]: Removed session 4. Jan 30 14:10:05.170169 systemd[1]: Started sshd@2-10.200.20.33:22-10.200.16.10:46110.service - OpenSSH per-connection server daemon (10.200.16.10:46110). 
Jan 30 14:10:05.622003 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 46110 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:10:05.623586 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:05.629192 systemd-logind[1672]: New session 5 of user core. Jan 30 14:10:05.634405 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:10:05.953657 sshd[2272]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:05.959351 systemd[1]: sshd@2-10.200.20.33:22-10.200.16.10:46110.service: Deactivated successfully. Jan 30 14:10:05.963653 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:10:05.964613 systemd-logind[1672]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:10:05.965825 systemd-logind[1672]: Removed session 5. Jan 30 14:10:06.036432 systemd[1]: Started sshd@3-10.200.20.33:22-10.200.16.10:51816.service - OpenSSH per-connection server daemon (10.200.16.10:51816). Jan 30 14:10:06.467926 sshd[2279]: Accepted publickey for core from 10.200.16.10 port 51816 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:10:06.469630 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:06.476366 systemd-logind[1672]: New session 6 of user core. Jan 30 14:10:06.482346 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:10:06.789438 sshd[2279]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:06.793942 systemd[1]: sshd@3-10.200.20.33:22-10.200.16.10:51816.service: Deactivated successfully. Jan 30 14:10:06.795921 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:10:06.796830 systemd-logind[1672]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:10:06.798186 systemd-logind[1672]: Removed session 6. 
Jan 30 14:10:06.879454 systemd[1]: Started sshd@4-10.200.20.33:22-10.200.16.10:51828.service - OpenSSH per-connection server daemon (10.200.16.10:51828). Jan 30 14:10:07.308782 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 51828 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:10:07.311032 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:07.318046 systemd-logind[1672]: New session 7 of user core. Jan 30 14:10:07.324319 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:10:07.657298 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:10:07.657615 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:10:07.689442 sudo[2289]: pam_unix(sudo:session): session closed for user root Jan 30 14:10:07.776598 sshd[2286]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:07.781487 systemd[1]: sshd@4-10.200.20.33:22-10.200.16.10:51828.service: Deactivated successfully. Jan 30 14:10:07.783537 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:10:07.784399 systemd-logind[1672]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:10:07.786971 systemd-logind[1672]: Removed session 7. Jan 30 14:10:07.864472 systemd[1]: Started sshd@5-10.200.20.33:22-10.200.16.10:51830.service - OpenSSH per-connection server daemon (10.200.16.10:51830). Jan 30 14:10:08.295596 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 51830 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:10:08.297963 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:08.303631 systemd-logind[1672]: New session 8 of user core. Jan 30 14:10:08.312349 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 14:10:08.545677 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:10:08.545987 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:10:08.550865 sudo[2298]: pam_unix(sudo:session): session closed for user root Jan 30 14:10:08.556808 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:10:08.557135 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:10:08.578417 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:10:08.580165 auditctl[2301]: No rules Jan 30 14:10:08.581399 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:10:08.581643 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:10:08.586930 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:10:08.626011 augenrules[2319]: No rules Jan 30 14:10:08.628168 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:10:08.629533 sudo[2297]: pam_unix(sudo:session): session closed for user root Jan 30 14:10:08.699840 sshd[2294]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:08.702977 systemd[1]: sshd@5-10.200.20.33:22-10.200.16.10:51830.service: Deactivated successfully. Jan 30 14:10:08.704921 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:10:08.706808 systemd-logind[1672]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:10:08.708352 systemd-logind[1672]: Removed session 8. Jan 30 14:10:08.789429 systemd[1]: Started sshd@6-10.200.20.33:22-10.200.16.10:51842.service - OpenSSH per-connection server daemon (10.200.16.10:51842). 
Jan 30 14:10:09.218977 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 51842 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:10:09.220670 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:10:09.225513 systemd-logind[1672]: New session 9 of user core. Jan 30 14:10:09.236289 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:10:09.468376 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:10:09.468690 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:10:10.579563 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:10:10.580940 (dockerd)[2345]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:10:11.322119 dockerd[2345]: time="2025-01-30T14:10:11.320352395Z" level=info msg="Starting up" Jan 30 14:10:11.698618 dockerd[2345]: time="2025-01-30T14:10:11.698562557Z" level=info msg="Loading containers: start." Jan 30 14:10:11.839134 kernel: Initializing XFRM netlink socket Jan 30 14:10:11.992484 systemd-networkd[1449]: docker0: Link UP Jan 30 14:10:12.017716 dockerd[2345]: time="2025-01-30T14:10:12.017570673Z" level=info msg="Loading containers: done." Jan 30 14:10:12.035144 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2670216389-merged.mount: Deactivated successfully. 
Jan 30 14:10:12.044333 dockerd[2345]: time="2025-01-30T14:10:12.044270930Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:10:12.044491 dockerd[2345]: time="2025-01-30T14:10:12.044410010Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:10:12.044561 dockerd[2345]: time="2025-01-30T14:10:12.044538610Z" level=info msg="Daemon has completed initialization" Jan 30 14:10:12.100155 dockerd[2345]: time="2025-01-30T14:10:12.100011848Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:10:12.100440 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:10:12.607280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 14:10:12.616379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:12.723655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:12.735430 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:10:12.783659 kubelet[2489]: E0130 14:10:12.783596 2489 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:10:12.787064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:10:12.787403 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 14:10:13.704086 containerd[1698]: time="2025-01-30T14:10:13.704026208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:10:14.464311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623350266.mount: Deactivated successfully. Jan 30 14:10:15.737604 containerd[1698]: time="2025-01-30T14:10:15.737532039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:15.741390 containerd[1698]: time="2025-01-30T14:10:15.741307007Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935" Jan 30 14:10:15.747137 containerd[1698]: time="2025-01-30T14:10:15.745904617Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:15.750411 containerd[1698]: time="2025-01-30T14:10:15.750322346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:15.751877 containerd[1698]: time="2025-01-30T14:10:15.751624349Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.047543381s" Jan 30 14:10:15.751877 containerd[1698]: time="2025-01-30T14:10:15.751676909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 14:10:15.775265 containerd[1698]: 
time="2025-01-30T14:10:15.774983998Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:10:17.158143 containerd[1698]: time="2025-01-30T14:10:17.158038690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:17.161767 containerd[1698]: time="2025-01-30T14:10:17.161711178Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561" Jan 30 14:10:17.168131 containerd[1698]: time="2025-01-30T14:10:17.166872029Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:17.173987 containerd[1698]: time="2025-01-30T14:10:17.173934804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:17.175359 containerd[1698]: time="2025-01-30T14:10:17.175308207Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.400280729s" Jan 30 14:10:17.175359 containerd[1698]: time="2025-01-30T14:10:17.175357047Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 14:10:17.197130 containerd[1698]: time="2025-01-30T14:10:17.197070333Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 
14:10:18.268159 containerd[1698]: time="2025-01-30T14:10:18.267732653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:18.270628 containerd[1698]: time="2025-01-30T14:10:18.270341100Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338" Jan 30 14:10:18.274657 containerd[1698]: time="2025-01-30T14:10:18.274132389Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:18.281562 containerd[1698]: time="2025-01-30T14:10:18.281511286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:18.282541 containerd[1698]: time="2025-01-30T14:10:18.282494128Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.085357075s" Jan 30 14:10:18.282541 containerd[1698]: time="2025-01-30T14:10:18.282538128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 14:10:18.304186 containerd[1698]: time="2025-01-30T14:10:18.304134259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:10:19.724433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833521219.mount: Deactivated successfully. 
Jan 30 14:10:20.206090 containerd[1698]: time="2025-01-30T14:10:20.206020700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:20.211498 containerd[1698]: time="2025-01-30T14:10:20.211421593Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712" Jan 30 14:10:20.216674 containerd[1698]: time="2025-01-30T14:10:20.216604485Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:20.224091 containerd[1698]: time="2025-01-30T14:10:20.224039503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:20.225232 containerd[1698]: time="2025-01-30T14:10:20.224730584Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.920545845s" Jan 30 14:10:20.225232 containerd[1698]: time="2025-01-30T14:10:20.224776584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 14:10:20.246955 containerd[1698]: time="2025-01-30T14:10:20.246913357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:10:20.981062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193585176.mount: Deactivated successfully. 
Jan 30 14:10:22.279159 containerd[1698]: time="2025-01-30T14:10:22.278404183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:22.282748 containerd[1698]: time="2025-01-30T14:10:22.282447513Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 30 14:10:22.288132 containerd[1698]: time="2025-01-30T14:10:22.286464042Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:22.292549 containerd[1698]: time="2025-01-30T14:10:22.292487416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:22.294015 containerd[1698]: time="2025-01-30T14:10:22.293957060Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.046804103s" Jan 30 14:10:22.294015 containerd[1698]: time="2025-01-30T14:10:22.294006460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:10:22.316131 containerd[1698]: time="2025-01-30T14:10:22.316052952Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:10:22.857267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 14:10:22.867412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 14:10:22.975365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:22.985531 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:10:23.030348 kubelet[2645]: E0130 14:10:23.030260 2645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:10:23.033565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:10:23.033886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:10:23.287434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520777166.mount: Deactivated successfully. Jan 30 14:10:23.314822 containerd[1698]: time="2025-01-30T14:10:23.314766545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:23.316871 containerd[1698]: time="2025-01-30T14:10:23.316827910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 30 14:10:23.321598 containerd[1698]: time="2025-01-30T14:10:23.321531681Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:23.326329 containerd[1698]: time="2025-01-30T14:10:23.326258332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:23.327371 containerd[1698]: time="2025-01-30T14:10:23.327202654Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.011100822s" Jan 30 14:10:23.327371 containerd[1698]: time="2025-01-30T14:10:23.327249574Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 14:10:23.349609 containerd[1698]: time="2025-01-30T14:10:23.349295266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:10:25.061627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197107731.mount: Deactivated successfully. Jan 30 14:10:26.870609 containerd[1698]: time="2025-01-30T14:10:26.869287638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:26.872557 containerd[1698]: time="2025-01-30T14:10:26.872512085Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 30 14:10:26.876362 containerd[1698]: time="2025-01-30T14:10:26.876297334Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:26.881536 containerd[1698]: time="2025-01-30T14:10:26.881453985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:26.882849 containerd[1698]: time="2025-01-30T14:10:26.882693228Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag 
\"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.533355882s" Jan 30 14:10:26.882849 containerd[1698]: time="2025-01-30T14:10:26.882744268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 14:10:32.411427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:32.420386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:32.447405 systemd[1]: Reloading requested from client PID 2766 ('systemctl') (unit session-9.scope)... Jan 30 14:10:32.447423 systemd[1]: Reloading... Jan 30 14:10:32.572638 zram_generator::config[2807]: No configuration found. Jan 30 14:10:32.693258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:10:32.772548 systemd[1]: Reloading finished in 324 ms. Jan 30 14:10:32.822829 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:10:32.822927 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:10:32.823234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:32.828484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:32.943012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:32.950351 (kubelet)[2873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:10:33.004325 kubelet[2873]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:10:33.004325 kubelet[2873]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:10:33.004325 kubelet[2873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:10:33.007181 kubelet[2873]: I0130 14:10:33.006174 2873 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:10:33.777058 kubelet[2873]: I0130 14:10:33.777010 2873 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:10:33.777349 kubelet[2873]: I0130 14:10:33.777332 2873 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:10:33.777665 kubelet[2873]: I0130 14:10:33.777648 2873 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:10:33.791064 kubelet[2873]: E0130 14:10:33.791014 2873 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.33:6443: connect: connection refused Jan 30 14:10:33.791679 kubelet[2873]: I0130 14:10:33.791644 2873 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:10:33.800634 kubelet[2873]: I0130 14:10:33.800597 2873 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:10:33.802133 kubelet[2873]: I0130 14:10:33.802067 2873 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:10:33.802487 kubelet[2873]: I0130 14:10:33.802278 2873 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-554d7cc729","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:10:33.802633 kubelet[2873]: I0130 14:10:33.802619 2873 topology_manager.go:138] "Creating topology manager with none policy" Jan 
30 14:10:33.802686 kubelet[2873]: I0130 14:10:33.802678 2873 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:10:33.802914 kubelet[2873]: I0130 14:10:33.802896 2873 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:10:33.803806 kubelet[2873]: I0130 14:10:33.803784 2873 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:10:33.803912 kubelet[2873]: I0130 14:10:33.803901 2873 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:10:33.804000 kubelet[2873]: I0130 14:10:33.803990 2873 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:10:33.804069 kubelet[2873]: I0130 14:10:33.804057 2873 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:10:33.805309 kubelet[2873]: I0130 14:10:33.805288 2873 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:10:33.805650 kubelet[2873]: I0130 14:10:33.805636 2873 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:10:33.805878 kubelet[2873]: W0130 14:10:33.805764 2873 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 14:10:33.807216 kubelet[2873]: W0130 14:10:33.807140 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:33.808143 kubelet[2873]: E0130 14:10:33.807376 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:33.808143 kubelet[2873]: W0130 14:10:33.807713 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-554d7cc729&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:33.808143 kubelet[2873]: E0130 14:10:33.807762 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-554d7cc729&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:33.808859 kubelet[2873]: I0130 14:10:33.808837 2873 server.go:1264] "Started kubelet"
Jan 30 14:10:33.813802 kubelet[2873]: I0130 14:10:33.813746 2873 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:10:33.815145 kubelet[2873]: I0130 14:10:33.815051 2873 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:10:33.815736 kubelet[2873]: I0130 14:10:33.815712 2873 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:10:33.817030 kubelet[2873]: E0130 14:10:33.816900 2873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-554d7cc729.181f7dbd72558b1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-554d7cc729,UID:ci-4081.3.0-a-554d7cc729,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-554d7cc729,},FirstTimestamp:2025-01-30 14:10:33.808800542 +0000 UTC m=+0.854454470,LastTimestamp:2025-01-30 14:10:33.808800542 +0000 UTC m=+0.854454470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-554d7cc729,}"
Jan 30 14:10:33.817260 kubelet[2873]: I0130 14:10:33.817238 2873 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 14:10:33.818266 kubelet[2873]: I0130 14:10:33.818238 2873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:10:33.823195 kubelet[2873]: I0130 14:10:33.822330 2873 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 14:10:33.823195 kubelet[2873]: I0130 14:10:33.822482 2873 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 14:10:33.823195 kubelet[2873]: I0130 14:10:33.822572 2873 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:10:33.823195 kubelet[2873]: W0130 14:10:33.822978 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:33.823195 kubelet[2873]: E0130 14:10:33.823026 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:33.825151 kubelet[2873]: E0130 14:10:33.825073 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-554d7cc729?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="200ms"
Jan 30 14:10:33.826217 kubelet[2873]: I0130 14:10:33.826190 2873 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:10:33.826482 kubelet[2873]: I0130 14:10:33.826452 2873 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:10:33.827032 kubelet[2873]: E0130 14:10:33.827000 2873 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:10:33.828694 kubelet[2873]: I0130 14:10:33.828663 2873 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:10:33.881197 kubelet[2873]: I0130 14:10:33.881149 2873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:10:33.883878 kubelet[2873]: I0130 14:10:33.883825 2873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 14:10:33.884115 kubelet[2873]: I0130 14:10:33.884084 2873 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 14:10:33.884250 kubelet[2873]: I0130 14:10:33.884238 2873 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 14:10:33.884395 kubelet[2873]: E0130 14:10:33.884360 2873 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 14:10:33.886465 kubelet[2873]: W0130 14:10:33.886422 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:33.887364 kubelet[2873]: E0130 14:10:33.887338 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:33.984663 kubelet[2873]: E0130 14:10:33.984609 2873 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 14:10:34.026695 kubelet[2873]: E0130 14:10:34.026638 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-554d7cc729?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="400ms"
Jan 30 14:10:34.127988 kubelet[2873]: I0130 14:10:34.127812 2873 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.128878 kubelet[2873]: E0130 14:10:34.128518 2873 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.128878 kubelet[2873]: I0130 14:10:34.128825 2873 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 14:10:34.128878 kubelet[2873]: I0130 14:10:34.128837 2873 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 14:10:34.128878 kubelet[2873]: I0130 14:10:34.128863 2873 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:10:34.133749 kubelet[2873]: I0130 14:10:34.133710 2873 policy_none.go:49] "None policy: Start"
Jan 30 14:10:34.134846 kubelet[2873]: I0130 14:10:34.134818 2873 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 14:10:34.134976 kubelet[2873]: I0130 14:10:34.134860 2873 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 14:10:34.143851 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 14:10:34.153453 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 14:10:34.157148 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 14:10:34.169668 kubelet[2873]: I0130 14:10:34.169037 2873 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:10:34.169668 kubelet[2873]: I0130 14:10:34.169301 2873 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:10:34.169668 kubelet[2873]: I0130 14:10:34.169431 2873 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:10:34.172776 kubelet[2873]: E0130 14:10:34.172718 2873 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:34.185821 kubelet[2873]: I0130 14:10:34.185463 2873 topology_manager.go:215] "Topology Admit Handler" podUID="06854d8474fcdb28faaefb8be3b7c24b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.187364 kubelet[2873]: I0130 14:10:34.187326 2873 topology_manager.go:215] "Topology Admit Handler" podUID="42091109dee8187b44f65516c6d1723e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.190068 kubelet[2873]: I0130 14:10:34.189865 2873 topology_manager.go:215] "Topology Admit Handler" podUID="bf634693ab4b20754b974345fa171838" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.198228 systemd[1]: Created slice kubepods-burstable-pod06854d8474fcdb28faaefb8be3b7c24b.slice - libcontainer container kubepods-burstable-pod06854d8474fcdb28faaefb8be3b7c24b.slice.
Jan 30 14:10:34.206434 systemd[1]: Created slice kubepods-burstable-pod42091109dee8187b44f65516c6d1723e.slice - libcontainer container kubepods-burstable-pod42091109dee8187b44f65516c6d1723e.slice.
Jan 30 14:10:34.220251 systemd[1]: Created slice kubepods-burstable-podbf634693ab4b20754b974345fa171838.slice - libcontainer container kubepods-burstable-podbf634693ab4b20754b974345fa171838.slice.
Jan 30 14:10:34.225128 kubelet[2873]: I0130 14:10:34.224895 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06854d8474fcdb28faaefb8be3b7c24b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-554d7cc729\" (UID: \"06854d8474fcdb28faaefb8be3b7c24b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.225128 kubelet[2873]: I0130 14:10:34.224932 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06854d8474fcdb28faaefb8be3b7c24b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-554d7cc729\" (UID: \"06854d8474fcdb28faaefb8be3b7c24b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.225128 kubelet[2873]: I0130 14:10:34.224953 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.225128 kubelet[2873]: I0130 14:10:34.224968 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf634693ab4b20754b974345fa171838-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-554d7cc729\" (UID: \"bf634693ab4b20754b974345fa171838\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.225128 kubelet[2873]: I0130 14:10:34.224982 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06854d8474fcdb28faaefb8be3b7c24b-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-554d7cc729\" (UID: \"06854d8474fcdb28faaefb8be3b7c24b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.225363 kubelet[2873]: I0130 14:10:34.224996 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.225363 kubelet[2873]: I0130 14:10:34.225010 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.225363 kubelet[2873]: I0130 14:10:34.225024 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.225363 kubelet[2873]: I0130 14:10:34.225040 2873 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.331492 kubelet[2873]: I0130 14:10:34.331011 2873 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.331772 kubelet[2873]: E0130 14:10:34.331743 2873 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.427710 kubelet[2873]: E0130 14:10:34.427562 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-554d7cc729?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="800ms"
Jan 30 14:10:34.505600 containerd[1698]: time="2025-01-30T14:10:34.505551370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-554d7cc729,Uid:06854d8474fcdb28faaefb8be3b7c24b,Namespace:kube-system,Attempt:0,}"
Jan 30 14:10:34.519323 containerd[1698]: time="2025-01-30T14:10:34.519254639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-554d7cc729,Uid:42091109dee8187b44f65516c6d1723e,Namespace:kube-system,Attempt:0,}"
Jan 30 14:10:34.523610 containerd[1698]: time="2025-01-30T14:10:34.523389088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-554d7cc729,Uid:bf634693ab4b20754b974345fa171838,Namespace:kube-system,Attempt:0,}"
Jan 30 14:10:34.677964 kubelet[2873]: W0130 14:10:34.677791 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:34.677964 kubelet[2873]: E0130 14:10:34.677846 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:34.720624 kubelet[2873]: W0130 14:10:34.720517 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-554d7cc729&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:34.720624 kubelet[2873]: E0130 14:10:34.720593 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-554d7cc729&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:34.734510 kubelet[2873]: I0130 14:10:34.733864 2873 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:34.734510 kubelet[2873]: E0130 14:10:34.734221 2873 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:35.088664 kubelet[2873]: W0130 14:10:35.088433 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:35.088664 kubelet[2873]: E0130 14:10:35.088521 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:35.158643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771243621.mount: Deactivated successfully.
Jan 30 14:10:35.194149 containerd[1698]: time="2025-01-30T14:10:35.193082338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:10:35.195421 containerd[1698]: time="2025-01-30T14:10:35.195338023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 30 14:10:35.198872 containerd[1698]: time="2025-01-30T14:10:35.198823310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:10:35.202265 containerd[1698]: time="2025-01-30T14:10:35.201473956Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:10:35.204984 containerd[1698]: time="2025-01-30T14:10:35.204929803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:10:35.208481 containerd[1698]: time="2025-01-30T14:10:35.208430171Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:10:35.212062 containerd[1698]: time="2025-01-30T14:10:35.211979739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:10:35.216853 containerd[1698]: time="2025-01-30T14:10:35.216789069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:10:35.217994 containerd[1698]: time="2025-01-30T14:10:35.217710151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 698.369871ms"
Jan 30 14:10:35.220115 containerd[1698]: time="2025-01-30T14:10:35.220052156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 696.586027ms"
Jan 30 14:10:35.220816 containerd[1698]: time="2025-01-30T14:10:35.220772518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 715.131588ms"
Jan 30 14:10:35.229875 kubelet[2873]: E0130 14:10:35.229819 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-554d7cc729?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="1.6s"
Jan 30 14:10:35.252809 kubelet[2873]: W0130 14:10:35.252725 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:35.252809 kubelet[2873]: E0130 14:10:35.252810 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:35.536171 kubelet[2873]: I0130 14:10:35.536131 2873 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:35.536691 kubelet[2873]: E0130 14:10:35.536649 2873 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:35.958403 kubelet[2873]: E0130 14:10:35.958351 2873 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:36.611690 containerd[1698]: time="2025-01-30T14:10:36.611440807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:10:36.611690 containerd[1698]: time="2025-01-30T14:10:36.611529047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:10:36.611690 containerd[1698]: time="2025-01-30T14:10:36.611548407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:10:36.619184 containerd[1698]: time="2025-01-30T14:10:36.613471812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:10:36.625532 containerd[1698]: time="2025-01-30T14:10:36.624478515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:10:36.625532 containerd[1698]: time="2025-01-30T14:10:36.624755556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:10:36.625532 containerd[1698]: time="2025-01-30T14:10:36.624928356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:10:36.625532 containerd[1698]: time="2025-01-30T14:10:36.624668476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:10:36.625532 containerd[1698]: time="2025-01-30T14:10:36.624723756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:10:36.625532 containerd[1698]: time="2025-01-30T14:10:36.624739876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:10:36.625532 containerd[1698]: time="2025-01-30T14:10:36.624843636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:10:36.626805 containerd[1698]: time="2025-01-30T14:10:36.626655960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:10:36.676355 systemd[1]: Started cri-containerd-9f1e138ed34825827f8c19e025274613619c7fa4b110d8b42792f381e0fb8a3b.scope - libcontainer container 9f1e138ed34825827f8c19e025274613619c7fa4b110d8b42792f381e0fb8a3b.
Jan 30 14:10:36.678953 systemd[1]: Started cri-containerd-c3125b4595e6905296320cb3e89c1de307b5e213e94386fb5ec85e79429fda41.scope - libcontainer container c3125b4595e6905296320cb3e89c1de307b5e213e94386fb5ec85e79429fda41.
Jan 30 14:10:36.685648 systemd[1]: Started cri-containerd-c586ee9aee911a7572f4652cf6035e2f028b233fda766b802dbcb81c196100a1.scope - libcontainer container c586ee9aee911a7572f4652cf6035e2f028b233fda766b802dbcb81c196100a1.
Jan 30 14:10:36.745275 containerd[1698]: time="2025-01-30T14:10:36.745061096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-554d7cc729,Uid:bf634693ab4b20754b974345fa171838,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f1e138ed34825827f8c19e025274613619c7fa4b110d8b42792f381e0fb8a3b\""
Jan 30 14:10:36.745573 containerd[1698]: time="2025-01-30T14:10:36.745220497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-554d7cc729,Uid:06854d8474fcdb28faaefb8be3b7c24b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3125b4595e6905296320cb3e89c1de307b5e213e94386fb5ec85e79429fda41\""
Jan 30 14:10:36.752734 containerd[1698]: time="2025-01-30T14:10:36.752654833Z" level=info msg="CreateContainer within sandbox \"c3125b4595e6905296320cb3e89c1de307b5e213e94386fb5ec85e79429fda41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 14:10:36.754879 containerd[1698]: time="2025-01-30T14:10:36.754781837Z" level=info msg="CreateContainer within sandbox \"9f1e138ed34825827f8c19e025274613619c7fa4b110d8b42792f381e0fb8a3b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 14:10:36.760284 containerd[1698]: time="2025-01-30T14:10:36.759880868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-554d7cc729,Uid:42091109dee8187b44f65516c6d1723e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c586ee9aee911a7572f4652cf6035e2f028b233fda766b802dbcb81c196100a1\""
Jan 30 14:10:36.764594 containerd[1698]: time="2025-01-30T14:10:36.764449858Z" level=info msg="CreateContainer within sandbox \"c586ee9aee911a7572f4652cf6035e2f028b233fda766b802dbcb81c196100a1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 14:10:36.827521 containerd[1698]: time="2025-01-30T14:10:36.827318354Z" level=info msg="CreateContainer within sandbox \"c3125b4595e6905296320cb3e89c1de307b5e213e94386fb5ec85e79429fda41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e5b2be5489eedd0f352e09198ca510d9732ed351501b8ac233366f3b2f357b6e\""
Jan 30 14:10:36.829154 containerd[1698]: time="2025-01-30T14:10:36.828513917Z" level=info msg="StartContainer for \"e5b2be5489eedd0f352e09198ca510d9732ed351501b8ac233366f3b2f357b6e\""
Jan 30 14:10:36.831284 kubelet[2873]: E0130 14:10:36.830905 2873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-554d7cc729?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="3.2s"
Jan 30 14:10:36.839914 containerd[1698]: time="2025-01-30T14:10:36.839861022Z" level=info msg="CreateContainer within sandbox \"c586ee9aee911a7572f4652cf6035e2f028b233fda766b802dbcb81c196100a1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49afcb4e5291dfd89fe37825f9a89371a378c39ad99a91c04558971625d925a7\""
Jan 30 14:10:36.841356 containerd[1698]: time="2025-01-30T14:10:36.840926064Z" level=info msg="StartContainer for \"49afcb4e5291dfd89fe37825f9a89371a378c39ad99a91c04558971625d925a7\""
Jan 30 14:10:36.848262 containerd[1698]: time="2025-01-30T14:10:36.848207560Z" level=info msg="CreateContainer within sandbox \"9f1e138ed34825827f8c19e025274613619c7fa4b110d8b42792f381e0fb8a3b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"32d6e8d5906547a827d84b4794ea7a2e314d7dbafe1364001f962e0946cf005b\""
Jan 30 14:10:36.849646 containerd[1698]: time="2025-01-30T14:10:36.849599363Z" level=info msg="StartContainer for \"32d6e8d5906547a827d84b4794ea7a2e314d7dbafe1364001f962e0946cf005b\""
Jan 30 14:10:36.861370 systemd[1]: Started cri-containerd-e5b2be5489eedd0f352e09198ca510d9732ed351501b8ac233366f3b2f357b6e.scope - libcontainer container e5b2be5489eedd0f352e09198ca510d9732ed351501b8ac233366f3b2f357b6e.
Jan 30 14:10:36.889382 systemd[1]: Started cri-containerd-49afcb4e5291dfd89fe37825f9a89371a378c39ad99a91c04558971625d925a7.scope - libcontainer container 49afcb4e5291dfd89fe37825f9a89371a378c39ad99a91c04558971625d925a7.
Jan 30 14:10:36.903383 systemd[1]: Started cri-containerd-32d6e8d5906547a827d84b4794ea7a2e314d7dbafe1364001f962e0946cf005b.scope - libcontainer container 32d6e8d5906547a827d84b4794ea7a2e314d7dbafe1364001f962e0946cf005b.
Jan 30 14:10:36.919960 kubelet[2873]: W0130 14:10:36.919442 2873 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-554d7cc729&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:36.919960 kubelet[2873]: E0130 14:10:36.919750 2873 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-554d7cc729&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 30 14:10:36.933518 containerd[1698]: time="2025-01-30T14:10:36.932894343Z" level=info msg="StartContainer for \"e5b2be5489eedd0f352e09198ca510d9732ed351501b8ac233366f3b2f357b6e\" returns successfully"
Jan 30 14:10:36.980473 containerd[1698]: time="2025-01-30T14:10:36.980397366Z" level=info msg="StartContainer for \"49afcb4e5291dfd89fe37825f9a89371a378c39ad99a91c04558971625d925a7\" returns successfully"
Jan 30 14:10:36.988422 containerd[1698]: time="2025-01-30T14:10:36.988358703Z" level=info msg="StartContainer for \"32d6e8d5906547a827d84b4794ea7a2e314d7dbafe1364001f962e0946cf005b\" returns successfully"
Jan 30 14:10:37.142425 kubelet[2873]: I0130 14:10:37.141480 2873 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:39.334449 kubelet[2873]: I0130 14:10:39.334128 2873 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-554d7cc729"
Jan 30 14:10:39.350667 kubelet[2873]: E0130 14:10:39.350605 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:39.383461 kubelet[2873]: E0130 14:10:39.383084 2873 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-554d7cc729.181f7dbd72558b1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-554d7cc729,UID:ci-4081.3.0-a-554d7cc729,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-554d7cc729,},FirstTimestamp:2025-01-30 14:10:33.808800542 +0000 UTC m=+0.854454470,LastTimestamp:2025-01-30 14:10:33.808800542 +0000 UTC m=+0.854454470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-554d7cc729,}"
Jan 30 14:10:39.451715 kubelet[2873]: E0130 14:10:39.451651 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:39.455656 kubelet[2873]: E0130 14:10:39.455374 2873 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-554d7cc729.181f7dbd736b101d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-554d7cc729,UID:ci-4081.3.0-a-554d7cc729,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-554d7cc729,},FirstTimestamp:2025-01-30 14:10:33.826988061 +0000 UTC m=+0.872641989,LastTimestamp:2025-01-30 14:10:33.826988061 +0000 UTC m=+0.872641989,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-554d7cc729,}"
Jan 30 14:10:39.533227 kubelet[2873]: E0130 14:10:39.532658 2873 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-554d7cc729.181f7dbd8557e158 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-554d7cc729,UID:ci-4081.3.0-a-554d7cc729,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.3.0-a-554d7cc729 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-554d7cc729,},FirstTimestamp:2025-01-30 14:10:34.127720792 +0000 UTC m=+1.173374720,LastTimestamp:2025-01-30 14:10:34.127720792 +0000 UTC m=+1.173374720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-554d7cc729,}"
Jan 30 14:10:39.554113 kubelet[2873]: E0130 14:10:39.552776 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:39.655134 kubelet[2873]: E0130 14:10:39.655053 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:39.684389 kubelet[2873]: E0130 14:10:39.684240 2873 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-554d7cc729.181f7dbd8557e158 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-554d7cc729,UID:ci-4081.3.0-a-554d7cc729,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.3.0-a-554d7cc729 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-554d7cc729,},FirstTimestamp:2025-01-30 14:10:34.127720792 +0000 UTC m=+1.173374720,LastTimestamp:2025-01-30 14:10:34.127725712 +0000 UTC m=+1.173379640,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-554d7cc729,}"
Jan 30 14:10:39.755682 kubelet[2873]: E0130 14:10:39.755627 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:39.855806 kubelet[2873]: E0130 14:10:39.855746 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:39.956054 kubelet[2873]: E0130 14:10:39.955933 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:40.056923 kubelet[2873]: E0130 14:10:40.056864 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:40.157315 kubelet[2873]: E0130 14:10:40.157263 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found"
Jan 30 14:10:40.258529 kubelet[2873]: E0130 14:10:40.258358 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node
\"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:40.359384 kubelet[2873]: E0130 14:10:40.359316 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:40.460242 kubelet[2873]: E0130 14:10:40.460187 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:40.560739 kubelet[2873]: E0130 14:10:40.560580 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:40.661666 kubelet[2873]: E0130 14:10:40.661615 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:40.762640 kubelet[2873]: E0130 14:10:40.762568 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:40.863583 kubelet[2873]: E0130 14:10:40.863529 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:40.964603 kubelet[2873]: E0130 14:10:40.964543 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:41.065221 kubelet[2873]: E0130 14:10:41.065143 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:41.166331 kubelet[2873]: E0130 14:10:41.166164 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:41.266944 kubelet[2873]: E0130 14:10:41.266879 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:41.367959 kubelet[2873]: E0130 14:10:41.367874 2873 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:41.468943 kubelet[2873]: E0130 14:10:41.468801 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:41.570269 kubelet[2873]: E0130 14:10:41.570189 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:41.670774 kubelet[2873]: E0130 14:10:41.670721 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:41.771199 kubelet[2873]: E0130 14:10:41.771038 2873 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-554d7cc729\" not found" Jan 30 14:10:42.413639 systemd[1]: Reloading requested from client PID 3150 ('systemctl') (unit session-9.scope)... Jan 30 14:10:42.414032 systemd[1]: Reloading... Jan 30 14:10:42.550162 zram_generator::config[3191]: No configuration found. Jan 30 14:10:42.677953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:10:42.773067 systemd[1]: Reloading finished in 358 ms. Jan 30 14:10:42.811128 kubelet[2873]: I0130 14:10:42.809334 2873 apiserver.go:52] "Watching apiserver" Jan 30 14:10:42.812123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:42.825728 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:10:42.826155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:42.826214 systemd[1]: kubelet.service: Consumed 1.265s CPU time, 113.2M memory peak, 0B memory swap peak. 
Jan 30 14:10:42.834637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:10:42.961034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:10:42.971576 (kubelet)[3254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:10:43.045045 kubelet[3254]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:10:43.045045 kubelet[3254]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:10:43.045045 kubelet[3254]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:10:43.045522 kubelet[3254]: I0130 14:10:43.045136 3254 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:10:43.052557 kubelet[3254]: I0130 14:10:43.051727 3254 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:10:43.052557 kubelet[3254]: I0130 14:10:43.051779 3254 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:10:43.052557 kubelet[3254]: I0130 14:10:43.052234 3254 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:10:43.056189 kubelet[3254]: I0130 14:10:43.055690 3254 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 14:10:43.059035 kubelet[3254]: I0130 14:10:43.058791 3254 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:10:43.068031 kubelet[3254]: I0130 14:10:43.067990 3254 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:10:43.068317 kubelet[3254]: I0130 14:10:43.068269 3254 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:10:43.068543 kubelet[3254]: I0130 14:10:43.068316 3254 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-554d7cc729","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experiment
alMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:10:43.068636 kubelet[3254]: I0130 14:10:43.068546 3254 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:10:43.068636 kubelet[3254]: I0130 14:10:43.068557 3254 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:10:43.068636 kubelet[3254]: I0130 14:10:43.068601 3254 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:10:43.068758 kubelet[3254]: I0130 14:10:43.068741 3254 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:10:43.068807 kubelet[3254]: I0130 14:10:43.068761 3254 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:10:43.068807 kubelet[3254]: I0130 14:10:43.068802 3254 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:10:43.068854 kubelet[3254]: I0130 14:10:43.068826 3254 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:10:43.073898 kubelet[3254]: I0130 14:10:43.073857 3254 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:10:43.074223 kubelet[3254]: I0130 14:10:43.074203 3254 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:10:43.074853 kubelet[3254]: I0130 14:10:43.074828 3254 server.go:1264] "Started kubelet" Jan 30 14:10:43.081871 kubelet[3254]: I0130 14:10:43.081807 3254 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:10:43.093563 kubelet[3254]: I0130 14:10:43.092943 3254 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:10:43.097158 kubelet[3254]: I0130 14:10:43.094278 3254 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:10:43.097158 kubelet[3254]: I0130 14:10:43.095382 3254 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:10:43.097158 kubelet[3254]: I0130 14:10:43.095624 3254 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:10:43.100131 kubelet[3254]: I0130 14:10:43.098529 3254 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:10:43.103150 kubelet[3254]: I0130 14:10:43.102698 3254 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:10:43.103150 kubelet[3254]: I0130 14:10:43.102897 3254 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:10:43.109444 kubelet[3254]: I0130 14:10:43.108141 3254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:10:43.109706 kubelet[3254]: I0130 14:10:43.109638 3254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:10:43.109762 kubelet[3254]: I0130 14:10:43.109732 3254 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:10:43.109800 kubelet[3254]: I0130 14:10:43.109781 3254 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:10:43.109899 kubelet[3254]: E0130 14:10:43.109863 3254 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:10:43.122157 kubelet[3254]: I0130 14:10:43.121564 3254 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:10:43.122157 kubelet[3254]: I0130 14:10:43.121702 3254 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:10:43.136923 kubelet[3254]: E0130 14:10:43.136582 3254 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:10:43.141507 kubelet[3254]: I0130 14:10:43.141464 3254 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:10:43.192568 kubelet[3254]: I0130 14:10:43.192505 3254 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:10:43.192568 kubelet[3254]: I0130 14:10:43.192529 3254 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:10:43.192568 kubelet[3254]: I0130 14:10:43.192556 3254 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:10:43.192771 kubelet[3254]: I0130 14:10:43.192737 3254 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:10:43.192771 kubelet[3254]: I0130 14:10:43.192749 3254 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:10:43.192771 kubelet[3254]: I0130 14:10:43.192771 3254 policy_none.go:49] "None policy: Start" Jan 30 14:10:43.193794 kubelet[3254]: I0130 14:10:43.193741 3254 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:10:43.193794 kubelet[3254]: I0130 14:10:43.193777 3254 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:10:43.193965 kubelet[3254]: I0130 14:10:43.193946 3254 state_mem.go:75] "Updated machine memory state" Jan 30 14:10:43.200623 kubelet[3254]: I0130 14:10:43.200579 3254 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:10:43.200847 kubelet[3254]: I0130 14:10:43.200792 3254 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:10:43.202663 kubelet[3254]: I0130 14:10:43.202561 3254 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:10:43.207709 kubelet[3254]: I0130 14:10:43.207648 3254 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.211999 kubelet[3254]: I0130 
14:10:43.210150 3254 topology_manager.go:215] "Topology Admit Handler" podUID="06854d8474fcdb28faaefb8be3b7c24b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.211999 kubelet[3254]: I0130 14:10:43.210302 3254 topology_manager.go:215] "Topology Admit Handler" podUID="42091109dee8187b44f65516c6d1723e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.211999 kubelet[3254]: I0130 14:10:43.210370 3254 topology_manager.go:215] "Topology Admit Handler" podUID="bf634693ab4b20754b974345fa171838" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.228142 kubelet[3254]: W0130 14:10:43.227934 3254 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:10:43.235368 kubelet[3254]: W0130 14:10:43.235262 3254 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:10:43.236592 kubelet[3254]: W0130 14:10:43.236333 3254 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 14:10:43.238219 kubelet[3254]: I0130 14:10:43.237524 3254 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.238219 kubelet[3254]: I0130 14:10:43.237726 3254 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.305693 kubelet[3254]: I0130 14:10:43.305639 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06854d8474fcdb28faaefb8be3b7c24b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-554d7cc729\" (UID: 
\"06854d8474fcdb28faaefb8be3b7c24b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.305693 kubelet[3254]: I0130 14:10:43.305687 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.305884 kubelet[3254]: I0130 14:10:43.305713 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.305884 kubelet[3254]: I0130 14:10:43.305732 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.305884 kubelet[3254]: I0130 14:10:43.305752 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.305884 kubelet[3254]: I0130 14:10:43.305772 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06854d8474fcdb28faaefb8be3b7c24b-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-554d7cc729\" (UID: \"06854d8474fcdb28faaefb8be3b7c24b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.305884 kubelet[3254]: I0130 14:10:43.305789 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06854d8474fcdb28faaefb8be3b7c24b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-554d7cc729\" (UID: \"06854d8474fcdb28faaefb8be3b7c24b\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.306149 kubelet[3254]: I0130 14:10:43.305805 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42091109dee8187b44f65516c6d1723e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-554d7cc729\" (UID: \"42091109dee8187b44f65516c6d1723e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:43.306149 kubelet[3254]: I0130 14:10:43.305857 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf634693ab4b20754b974345fa171838-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-554d7cc729\" (UID: \"bf634693ab4b20754b974345fa171838\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:44.070286 kubelet[3254]: I0130 14:10:44.069951 3254 apiserver.go:52] "Watching apiserver" Jan 30 14:10:44.104381 kubelet[3254]: I0130 14:10:44.102898 3254 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:10:44.223885 kubelet[3254]: W0130 14:10:44.223846 3254 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots] Jan 30 14:10:44.224656 kubelet[3254]: E0130 14:10:44.224088 3254 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-554d7cc729\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-554d7cc729" Jan 30 14:10:44.303967 kubelet[3254]: I0130 14:10:44.303887 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-554d7cc729" podStartSLOduration=1.3038605030000001 podStartE2EDuration="1.303860503s" podCreationTimestamp="2025-01-30 14:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:10:44.270815154 +0000 UTC m=+1.292122139" watchObservedRunningTime="2025-01-30 14:10:44.303860503 +0000 UTC m=+1.325167528" Jan 30 14:10:44.350148 kubelet[3254]: I0130 14:10:44.349900 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-554d7cc729" podStartSLOduration=1.34987736 podStartE2EDuration="1.34987736s" podCreationTimestamp="2025-01-30 14:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:10:44.309944036 +0000 UTC m=+1.331251061" watchObservedRunningTime="2025-01-30 14:10:44.34987736 +0000 UTC m=+1.371184385" Jan 30 14:10:44.391091 kubelet[3254]: I0130 14:10:44.390982 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-554d7cc729" podStartSLOduration=1.3909572049999999 podStartE2EDuration="1.390957205s" podCreationTimestamp="2025-01-30 14:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:10:44.351864044 +0000 UTC m=+1.373171069" watchObservedRunningTime="2025-01-30 
14:10:44.390957205 +0000 UTC m=+1.412264190" Jan 30 14:10:48.529146 sudo[2330]: pam_unix(sudo:session): session closed for user root Jan 30 14:10:48.599426 sshd[2327]: pam_unix(sshd:session): session closed for user core Jan 30 14:10:48.604981 systemd[1]: sshd@6-10.200.20.33:22-10.200.16.10:51842.service: Deactivated successfully. Jan 30 14:10:48.609864 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:10:48.610226 systemd[1]: session-9.scope: Consumed 7.124s CPU time, 186.6M memory peak, 0B memory swap peak. Jan 30 14:10:48.610972 systemd-logind[1672]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:10:48.612337 systemd-logind[1672]: Removed session 9. Jan 30 14:10:56.522633 kubelet[3254]: I0130 14:10:56.522399 3254 topology_manager.go:215] "Topology Admit Handler" podUID="356faaf8-bd09-4b96-81b0-1b7753f562e7" podNamespace="kube-system" podName="kube-proxy-s8kfm" Jan 30 14:10:56.540801 systemd[1]: Created slice kubepods-besteffort-pod356faaf8_bd09_4b96_81b0_1b7753f562e7.slice - libcontainer container kubepods-besteffort-pod356faaf8_bd09_4b96_81b0_1b7753f562e7.slice. Jan 30 14:10:56.576600 kubelet[3254]: I0130 14:10:56.576558 3254 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:10:56.577755 containerd[1698]: time="2025-01-30T14:10:56.577163804Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 14:10:56.578156 kubelet[3254]: I0130 14:10:56.577479 3254 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:10:56.583668 kubelet[3254]: I0130 14:10:56.583626 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qpbb\" (UniqueName: \"kubernetes.io/projected/356faaf8-bd09-4b96-81b0-1b7753f562e7-kube-api-access-8qpbb\") pod \"kube-proxy-s8kfm\" (UID: \"356faaf8-bd09-4b96-81b0-1b7753f562e7\") " pod="kube-system/kube-proxy-s8kfm" Jan 30 14:10:56.583887 kubelet[3254]: I0130 14:10:56.583870 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/356faaf8-bd09-4b96-81b0-1b7753f562e7-lib-modules\") pod \"kube-proxy-s8kfm\" (UID: \"356faaf8-bd09-4b96-81b0-1b7753f562e7\") " pod="kube-system/kube-proxy-s8kfm" Jan 30 14:10:56.583992 kubelet[3254]: I0130 14:10:56.583979 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/356faaf8-bd09-4b96-81b0-1b7753f562e7-kube-proxy\") pod \"kube-proxy-s8kfm\" (UID: \"356faaf8-bd09-4b96-81b0-1b7753f562e7\") " pod="kube-system/kube-proxy-s8kfm" Jan 30 14:10:56.584060 kubelet[3254]: I0130 14:10:56.584049 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/356faaf8-bd09-4b96-81b0-1b7753f562e7-xtables-lock\") pod \"kube-proxy-s8kfm\" (UID: \"356faaf8-bd09-4b96-81b0-1b7753f562e7\") " pod="kube-system/kube-proxy-s8kfm" Jan 30 14:10:56.701734 kubelet[3254]: E0130 14:10:56.701697 3254 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 14:10:56.701949 kubelet[3254]: E0130 14:10:56.701937 3254 projected.go:200] Error preparing data for projected volume 
kube-api-access-8qpbb for pod kube-system/kube-proxy-s8kfm: configmap "kube-root-ca.crt" not found Jan 30 14:10:56.702116 kubelet[3254]: E0130 14:10:56.702089 3254 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/356faaf8-bd09-4b96-81b0-1b7753f562e7-kube-api-access-8qpbb podName:356faaf8-bd09-4b96-81b0-1b7753f562e7 nodeName:}" failed. No retries permitted until 2025-01-30 14:10:57.202063226 +0000 UTC m=+14.223370251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8qpbb" (UniqueName: "kubernetes.io/projected/356faaf8-bd09-4b96-81b0-1b7753f562e7-kube-api-access-8qpbb") pod "kube-proxy-s8kfm" (UID: "356faaf8-bd09-4b96-81b0-1b7753f562e7") : configmap "kube-root-ca.crt" not found Jan 30 14:10:56.806560 kubelet[3254]: I0130 14:10:56.805851 3254 topology_manager.go:215] "Topology Admit Handler" podUID="50b7c8a0-b071-4c87-80ad-8be18c6cf0fb" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-fxxht" Jan 30 14:10:56.820247 systemd[1]: Created slice kubepods-besteffort-pod50b7c8a0_b071_4c87_80ad_8be18c6cf0fb.slice - libcontainer container kubepods-besteffort-pod50b7c8a0_b071_4c87_80ad_8be18c6cf0fb.slice. 
Jan 30 14:10:56.885730 kubelet[3254]: I0130 14:10:56.885637 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50b7c8a0-b071-4c87-80ad-8be18c6cf0fb-var-lib-calico\") pod \"tigera-operator-7bc55997bb-fxxht\" (UID: \"50b7c8a0-b071-4c87-80ad-8be18c6cf0fb\") " pod="tigera-operator/tigera-operator-7bc55997bb-fxxht" Jan 30 14:10:56.885923 kubelet[3254]: I0130 14:10:56.885763 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptk8\" (UniqueName: \"kubernetes.io/projected/50b7c8a0-b071-4c87-80ad-8be18c6cf0fb-kube-api-access-lptk8\") pod \"tigera-operator-7bc55997bb-fxxht\" (UID: \"50b7c8a0-b071-4c87-80ad-8be18c6cf0fb\") " pod="tigera-operator/tigera-operator-7bc55997bb-fxxht" Jan 30 14:10:57.125830 containerd[1698]: time="2025-01-30T14:10:57.125421835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-fxxht,Uid:50b7c8a0-b071-4c87-80ad-8be18c6cf0fb,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:10:57.174777 containerd[1698]: time="2025-01-30T14:10:57.174626699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:10:57.174777 containerd[1698]: time="2025-01-30T14:10:57.174700219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:10:57.174777 containerd[1698]: time="2025-01-30T14:10:57.174715979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:57.175205 containerd[1698]: time="2025-01-30T14:10:57.174816379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:57.203329 systemd[1]: Started cri-containerd-8ce9fa441a20d35c67069de35e2c843d2fa55597ead4f07cf0642dadcab3e353.scope - libcontainer container 8ce9fa441a20d35c67069de35e2c843d2fa55597ead4f07cf0642dadcab3e353. Jan 30 14:10:57.243334 containerd[1698]: time="2025-01-30T14:10:57.243057082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-fxxht,Uid:50b7c8a0-b071-4c87-80ad-8be18c6cf0fb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8ce9fa441a20d35c67069de35e2c843d2fa55597ead4f07cf0642dadcab3e353\"" Jan 30 14:10:57.246645 containerd[1698]: time="2025-01-30T14:10:57.246400010Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:10:57.452336 containerd[1698]: time="2025-01-30T14:10:57.452194600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8kfm,Uid:356faaf8-bd09-4b96-81b0-1b7753f562e7,Namespace:kube-system,Attempt:0,}" Jan 30 14:10:57.492904 containerd[1698]: time="2025-01-30T14:10:57.492743868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:10:57.492904 containerd[1698]: time="2025-01-30T14:10:57.492820268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:10:57.492904 containerd[1698]: time="2025-01-30T14:10:57.492858188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:57.493259 containerd[1698]: time="2025-01-30T14:10:57.492968268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:10:57.512359 systemd[1]: Started cri-containerd-91f08fa7e8124aa5670bfb1f8b7f36fdcac05169a96a45912dd7edbaeea4d4c3.scope - libcontainer container 91f08fa7e8124aa5670bfb1f8b7f36fdcac05169a96a45912dd7edbaeea4d4c3. Jan 30 14:10:57.539016 containerd[1698]: time="2025-01-30T14:10:57.538636464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8kfm,Uid:356faaf8-bd09-4b96-81b0-1b7753f562e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"91f08fa7e8124aa5670bfb1f8b7f36fdcac05169a96a45912dd7edbaeea4d4c3\"" Jan 30 14:10:57.543777 containerd[1698]: time="2025-01-30T14:10:57.543715393Z" level=info msg="CreateContainer within sandbox \"91f08fa7e8124aa5670bfb1f8b7f36fdcac05169a96a45912dd7edbaeea4d4c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:10:57.585948 containerd[1698]: time="2025-01-30T14:10:57.585795063Z" level=info msg="CreateContainer within sandbox \"91f08fa7e8124aa5670bfb1f8b7f36fdcac05169a96a45912dd7edbaeea4d4c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87b71306ef25d07571788f4130d1fdc1cd694b2a97a86e55b8aec5995afb9709\"" Jan 30 14:10:57.588088 containerd[1698]: time="2025-01-30T14:10:57.587066465Z" level=info msg="StartContainer for \"87b71306ef25d07571788f4130d1fdc1cd694b2a97a86e55b8aec5995afb9709\"" Jan 30 14:10:57.617361 systemd[1]: Started cri-containerd-87b71306ef25d07571788f4130d1fdc1cd694b2a97a86e55b8aec5995afb9709.scope - libcontainer container 87b71306ef25d07571788f4130d1fdc1cd694b2a97a86e55b8aec5995afb9709. Jan 30 14:10:57.654410 containerd[1698]: time="2025-01-30T14:10:57.654334257Z" level=info msg="StartContainer for \"87b71306ef25d07571788f4130d1fdc1cd694b2a97a86e55b8aec5995afb9709\" returns successfully" Jan 30 14:10:59.124021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295446164.mount: Deactivated successfully. 
Jan 30 14:10:59.536296 containerd[1698]: time="2025-01-30T14:10:59.536151226Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:59.538658 containerd[1698]: time="2025-01-30T14:10:59.538437350Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 14:10:59.542880 containerd[1698]: time="2025-01-30T14:10:59.542840917Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:59.548544 containerd[1698]: time="2025-01-30T14:10:59.548456247Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:10:59.549423 containerd[1698]: time="2025-01-30T14:10:59.549243088Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.302782238s" Jan 30 14:10:59.549423 containerd[1698]: time="2025-01-30T14:10:59.549284088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 14:10:59.553401 containerd[1698]: time="2025-01-30T14:10:59.553341175Z" level=info msg="CreateContainer within sandbox \"8ce9fa441a20d35c67069de35e2c843d2fa55597ead4f07cf0642dadcab3e353\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:10:59.585600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3174895496.mount: Deactivated successfully. 
Jan 30 14:10:59.596438 containerd[1698]: time="2025-01-30T14:10:59.596335686Z" level=info msg="CreateContainer within sandbox \"8ce9fa441a20d35c67069de35e2c843d2fa55597ead4f07cf0642dadcab3e353\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0623d9f39f8cf1ff546a39f9dfd1f2bbfd64c9de193fca61324a447bc485f909\"" Jan 30 14:10:59.597846 containerd[1698]: time="2025-01-30T14:10:59.597371328Z" level=info msg="StartContainer for \"0623d9f39f8cf1ff546a39f9dfd1f2bbfd64c9de193fca61324a447bc485f909\"" Jan 30 14:10:59.625344 systemd[1]: Started cri-containerd-0623d9f39f8cf1ff546a39f9dfd1f2bbfd64c9de193fca61324a447bc485f909.scope - libcontainer container 0623d9f39f8cf1ff546a39f9dfd1f2bbfd64c9de193fca61324a447bc485f909. Jan 30 14:10:59.655591 containerd[1698]: time="2025-01-30T14:10:59.655438945Z" level=info msg="StartContainer for \"0623d9f39f8cf1ff546a39f9dfd1f2bbfd64c9de193fca61324a447bc485f909\" returns successfully" Jan 30 14:11:00.217774 kubelet[3254]: I0130 14:11:00.217517 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s8kfm" podStartSLOduration=4.217496719 podStartE2EDuration="4.217496719s" podCreationTimestamp="2025-01-30 14:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:10:58.210996382 +0000 UTC m=+15.232303407" watchObservedRunningTime="2025-01-30 14:11:00.217496719 +0000 UTC m=+17.238803744" Jan 30 14:11:00.217774 kubelet[3254]: I0130 14:11:00.217652 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-fxxht" podStartSLOduration=1.912910878 podStartE2EDuration="4.21764668s" podCreationTimestamp="2025-01-30 14:10:56 +0000 UTC" firstStartedPulling="2025-01-30 14:10:57.245769528 +0000 UTC m=+14.267076513" lastFinishedPulling="2025-01-30 14:10:59.55050529 +0000 UTC m=+16.571812315" observedRunningTime="2025-01-30 
14:11:00.217412759 +0000 UTC m=+17.238719784" watchObservedRunningTime="2025-01-30 14:11:00.21764668 +0000 UTC m=+17.238953705" Jan 30 14:11:04.457385 kubelet[3254]: I0130 14:11:04.457259 3254 topology_manager.go:215] "Topology Admit Handler" podUID="b7f20152-a588-439d-bae2-c025f25c28e2" podNamespace="calico-system" podName="calico-typha-9d594bb65-54f2r" Jan 30 14:11:04.468733 systemd[1]: Created slice kubepods-besteffort-podb7f20152_a588_439d_bae2_c025f25c28e2.slice - libcontainer container kubepods-besteffort-podb7f20152_a588_439d_bae2_c025f25c28e2.slice. Jan 30 14:11:04.633977 kubelet[3254]: I0130 14:11:04.633906 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b7f20152-a588-439d-bae2-c025f25c28e2-typha-certs\") pod \"calico-typha-9d594bb65-54f2r\" (UID: \"b7f20152-a588-439d-bae2-c025f25c28e2\") " pod="calico-system/calico-typha-9d594bb65-54f2r" Jan 30 14:11:04.633977 kubelet[3254]: I0130 14:11:04.633970 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v5dd\" (UniqueName: \"kubernetes.io/projected/b7f20152-a588-439d-bae2-c025f25c28e2-kube-api-access-8v5dd\") pod \"calico-typha-9d594bb65-54f2r\" (UID: \"b7f20152-a588-439d-bae2-c025f25c28e2\") " pod="calico-system/calico-typha-9d594bb65-54f2r" Jan 30 14:11:04.633977 kubelet[3254]: I0130 14:11:04.633994 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f20152-a588-439d-bae2-c025f25c28e2-tigera-ca-bundle\") pod \"calico-typha-9d594bb65-54f2r\" (UID: \"b7f20152-a588-439d-bae2-c025f25c28e2\") " pod="calico-system/calico-typha-9d594bb65-54f2r" Jan 30 14:11:04.635411 kubelet[3254]: I0130 14:11:04.634834 3254 topology_manager.go:215] "Topology Admit Handler" podUID="53561ea3-b8d5-4924-9044-d18f698000b4" podNamespace="calico-system" 
podName="calico-node-gm4qv" Jan 30 14:11:04.648866 systemd[1]: Created slice kubepods-besteffort-pod53561ea3_b8d5_4924_9044_d18f698000b4.slice - libcontainer container kubepods-besteffort-pod53561ea3_b8d5_4924_9044_d18f698000b4.slice. Jan 30 14:11:04.737020 kubelet[3254]: I0130 14:11:04.734256 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-var-lib-calico\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737020 kubelet[3254]: I0130 14:11:04.734307 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kbp2\" (UniqueName: \"kubernetes.io/projected/53561ea3-b8d5-4924-9044-d18f698000b4-kube-api-access-5kbp2\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737020 kubelet[3254]: I0130 14:11:04.734326 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-cni-log-dir\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737020 kubelet[3254]: I0130 14:11:04.734343 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-flexvol-driver-host\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737020 kubelet[3254]: I0130 14:11:04.734366 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-policysync\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737297 kubelet[3254]: I0130 14:11:04.734389 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53561ea3-b8d5-4924-9044-d18f698000b4-tigera-ca-bundle\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737297 kubelet[3254]: I0130 14:11:04.734407 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/53561ea3-b8d5-4924-9044-d18f698000b4-node-certs\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737297 kubelet[3254]: I0130 14:11:04.734424 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-var-run-calico\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737297 kubelet[3254]: I0130 14:11:04.734440 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-cni-bin-dir\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737297 kubelet[3254]: I0130 14:11:04.734472 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-cni-net-dir\") pod 
\"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737401 kubelet[3254]: I0130 14:11:04.734670 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-lib-modules\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.737401 kubelet[3254]: I0130 14:11:04.734760 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53561ea3-b8d5-4924-9044-d18f698000b4-xtables-lock\") pod \"calico-node-gm4qv\" (UID: \"53561ea3-b8d5-4924-9044-d18f698000b4\") " pod="calico-system/calico-node-gm4qv" Jan 30 14:11:04.774709 containerd[1698]: time="2025-01-30T14:11:04.774650978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d594bb65-54f2r,Uid:b7f20152-a588-439d-bae2-c025f25c28e2,Namespace:calico-system,Attempt:0,}" Jan 30 14:11:04.840142 kubelet[3254]: E0130 14:11:04.839701 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.840142 kubelet[3254]: W0130 14:11:04.839743 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.840142 kubelet[3254]: E0130 14:11:04.839774 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.841556 kubelet[3254]: E0130 14:11:04.840952 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.841556 kubelet[3254]: W0130 14:11:04.840977 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.841556 kubelet[3254]: E0130 14:11:04.841029 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.844082 kubelet[3254]: I0130 14:11:04.843692 3254 topology_manager.go:215] "Topology Admit Handler" podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" podNamespace="calico-system" podName="csi-node-driver-sdxt7" Jan 30 14:11:04.844082 kubelet[3254]: E0130 14:11:04.843819 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.844082 kubelet[3254]: W0130 14:11:04.843843 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.844940 kubelet[3254]: E0130 14:11:04.844583 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.845280 kubelet[3254]: W0130 14:11:04.845084 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.845772 kubelet[3254]: E0130 14:11:04.845746 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of 
JSON input Jan 30 14:11:04.845983 kubelet[3254]: W0130 14:11:04.845964 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.846452 kubelet[3254]: E0130 14:11:04.846322 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdxt7" podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" Jan 30 14:11:04.846694 kubelet[3254]: E0130 14:11:04.846586 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.846778 kubelet[3254]: W0130 14:11:04.846762 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.846857 kubelet[3254]: E0130 14:11:04.846839 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.847041 kubelet[3254]: E0130 14:11:04.846828 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.847041 kubelet[3254]: E0130 14:11:04.846856 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.847132 kubelet[3254]: E0130 14:11:04.846863 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.848080 kubelet[3254]: E0130 14:11:04.847938 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.848636 kubelet[3254]: W0130 14:11:04.848221 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.848636 kubelet[3254]: E0130 14:11:04.848312 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.848758 kubelet[3254]: E0130 14:11:04.848657 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.848758 kubelet[3254]: W0130 14:11:04.848673 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.848758 kubelet[3254]: E0130 14:11:04.848712 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.850281 kubelet[3254]: E0130 14:11:04.850221 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.850281 kubelet[3254]: W0130 14:11:04.850257 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.852807 kubelet[3254]: E0130 14:11:04.852522 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.853267 kubelet[3254]: E0130 14:11:04.853199 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.853267 kubelet[3254]: W0130 14:11:04.853235 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.853267 kubelet[3254]: E0130 14:11:04.853263 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.854840 kubelet[3254]: E0130 14:11:04.854708 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.854840 kubelet[3254]: W0130 14:11:04.854824 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.857688 kubelet[3254]: E0130 14:11:04.857238 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.857688 kubelet[3254]: E0130 14:11:04.857598 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.857688 kubelet[3254]: W0130 14:11:04.857615 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.857688 kubelet[3254]: E0130 14:11:04.857636 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.858707 containerd[1698]: time="2025-01-30T14:11:04.857055595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:04.859714 kubelet[3254]: E0130 14:11:04.859245 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.859714 kubelet[3254]: W0130 14:11:04.859495 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.859714 kubelet[3254]: E0130 14:11:04.859529 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.860783 containerd[1698]: time="2025-01-30T14:11:04.859458079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:04.861722 containerd[1698]: time="2025-01-30T14:11:04.861209923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:04.863794 containerd[1698]: time="2025-01-30T14:11:04.862071165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:04.868091 kubelet[3254]: E0130 14:11:04.866255 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.868091 kubelet[3254]: W0130 14:11:04.866289 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.868091 kubelet[3254]: E0130 14:11:04.866315 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.888919 kubelet[3254]: E0130 14:11:04.888869 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.888919 kubelet[3254]: W0130 14:11:04.888901 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.888919 kubelet[3254]: E0130 14:11:04.888926 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.899375 systemd[1]: Started cri-containerd-42f82ea6e5877e1e355935a4e82440537a2e12e4add93c577c1c2ffb11ca8d81.scope - libcontainer container 42f82ea6e5877e1e355935a4e82440537a2e12e4add93c577c1c2ffb11ca8d81. 
Jan 30 14:11:04.938756 kubelet[3254]: E0130 14:11:04.938591 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.938756 kubelet[3254]: W0130 14:11:04.938684 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.938756 kubelet[3254]: E0130 14:11:04.938714 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.939552 kubelet[3254]: E0130 14:11:04.939078 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.939552 kubelet[3254]: W0130 14:11:04.939140 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.939552 kubelet[3254]: E0130 14:11:04.939156 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.939552 kubelet[3254]: E0130 14:11:04.939374 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.939552 kubelet[3254]: W0130 14:11:04.939384 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.939552 kubelet[3254]: E0130 14:11:04.939394 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.939776 kubelet[3254]: E0130 14:11:04.939573 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.939776 kubelet[3254]: W0130 14:11:04.939582 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.939776 kubelet[3254]: E0130 14:11:04.939591 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.939840 kubelet[3254]: E0130 14:11:04.939798 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.939840 kubelet[3254]: W0130 14:11:04.939806 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.939840 kubelet[3254]: E0130 14:11:04.939815 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.940000 kubelet[3254]: E0130 14:11:04.939966 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.940000 kubelet[3254]: W0130 14:11:04.939981 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.940000 kubelet[3254]: E0130 14:11:04.939996 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.940854 kubelet[3254]: E0130 14:11:04.940199 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.940854 kubelet[3254]: W0130 14:11:04.940212 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.940854 kubelet[3254]: E0130 14:11:04.940221 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.940854 kubelet[3254]: E0130 14:11:04.940387 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.940854 kubelet[3254]: W0130 14:11:04.940396 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.940854 kubelet[3254]: E0130 14:11:04.940406 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.940854 kubelet[3254]: E0130 14:11:04.940628 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.940854 kubelet[3254]: W0130 14:11:04.940636 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.940854 kubelet[3254]: E0130 14:11:04.940644 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.940854 kubelet[3254]: E0130 14:11:04.940807 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.941204 kubelet[3254]: W0130 14:11:04.940815 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.941204 kubelet[3254]: E0130 14:11:04.940830 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.941204 kubelet[3254]: E0130 14:11:04.940988 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.941204 kubelet[3254]: W0130 14:11:04.940995 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.941204 kubelet[3254]: E0130 14:11:04.941003 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.941204 kubelet[3254]: E0130 14:11:04.941200 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.941204 kubelet[3254]: W0130 14:11:04.941209 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.941351 kubelet[3254]: E0130 14:11:04.941217 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.941435 kubelet[3254]: E0130 14:11:04.941408 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.941435 kubelet[3254]: W0130 14:11:04.941424 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.941435 kubelet[3254]: E0130 14:11:04.941434 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.941621 kubelet[3254]: E0130 14:11:04.941598 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.941664 kubelet[3254]: W0130 14:11:04.941631 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.941664 kubelet[3254]: E0130 14:11:04.941641 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.942141 kubelet[3254]: E0130 14:11:04.942089 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.942141 kubelet[3254]: W0130 14:11:04.942126 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.942141 kubelet[3254]: E0130 14:11:04.942138 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.942533 kubelet[3254]: E0130 14:11:04.942504 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.942533 kubelet[3254]: W0130 14:11:04.942526 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.942533 kubelet[3254]: E0130 14:11:04.942537 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.942785 kubelet[3254]: E0130 14:11:04.942764 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.942785 kubelet[3254]: W0130 14:11:04.942780 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.942850 kubelet[3254]: E0130 14:11:04.942789 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.943052 kubelet[3254]: E0130 14:11:04.942938 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.943052 kubelet[3254]: W0130 14:11:04.942952 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.943052 kubelet[3254]: E0130 14:11:04.942960 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.943414 kubelet[3254]: E0130 14:11:04.943251 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.943414 kubelet[3254]: W0130 14:11:04.943405 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.943500 kubelet[3254]: E0130 14:11:04.943421 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.944702 kubelet[3254]: E0130 14:11:04.944666 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.944702 kubelet[3254]: W0130 14:11:04.944690 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.944702 kubelet[3254]: E0130 14:11:04.944706 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.945067 kubelet[3254]: E0130 14:11:04.945017 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.945067 kubelet[3254]: W0130 14:11:04.945033 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.945067 kubelet[3254]: E0130 14:11:04.945043 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.945202 kubelet[3254]: I0130 14:11:04.945076 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91e5ffe9-8aa4-4615-8d9e-b9e3697da13a-kubelet-dir\") pod \"csi-node-driver-sdxt7\" (UID: \"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a\") " pod="calico-system/csi-node-driver-sdxt7" Jan 30 14:11:04.945329 kubelet[3254]: E0130 14:11:04.945295 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.945329 kubelet[3254]: W0130 14:11:04.945313 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.945501 kubelet[3254]: E0130 14:11:04.945479 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.946339 kubelet[3254]: E0130 14:11:04.946292 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.946339 kubelet[3254]: W0130 14:11:04.946320 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.946339 kubelet[3254]: E0130 14:11:04.946343 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.946606 kubelet[3254]: E0130 14:11:04.946571 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.946606 kubelet[3254]: W0130 14:11:04.946587 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.946606 kubelet[3254]: E0130 14:11:04.946597 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.946696 kubelet[3254]: I0130 14:11:04.946629 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91e5ffe9-8aa4-4615-8d9e-b9e3697da13a-registration-dir\") pod \"csi-node-driver-sdxt7\" (UID: \"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a\") " pod="calico-system/csi-node-driver-sdxt7" Jan 30 14:11:04.947301 kubelet[3254]: E0130 14:11:04.947262 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.947301 kubelet[3254]: W0130 14:11:04.947286 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.947301 kubelet[3254]: E0130 14:11:04.947307 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.947437 kubelet[3254]: I0130 14:11:04.947334 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdf6v\" (UniqueName: \"kubernetes.io/projected/91e5ffe9-8aa4-4615-8d9e-b9e3697da13a-kube-api-access-zdf6v\") pod \"csi-node-driver-sdxt7\" (UID: \"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a\") " pod="calico-system/csi-node-driver-sdxt7" Jan 30 14:11:04.947615 kubelet[3254]: E0130 14:11:04.947587 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.947615 kubelet[3254]: W0130 14:11:04.947603 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.947688 kubelet[3254]: E0130 14:11:04.947619 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.947688 kubelet[3254]: I0130 14:11:04.947636 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/91e5ffe9-8aa4-4615-8d9e-b9e3697da13a-varrun\") pod \"csi-node-driver-sdxt7\" (UID: \"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a\") " pod="calico-system/csi-node-driver-sdxt7" Jan 30 14:11:04.948170 kubelet[3254]: E0130 14:11:04.948136 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.948170 kubelet[3254]: W0130 14:11:04.948156 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.948716 kubelet[3254]: E0130 14:11:04.948476 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.948716 kubelet[3254]: I0130 14:11:04.948525 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/91e5ffe9-8aa4-4615-8d9e-b9e3697da13a-socket-dir\") pod \"csi-node-driver-sdxt7\" (UID: \"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a\") " pod="calico-system/csi-node-driver-sdxt7" Jan 30 14:11:04.948940 kubelet[3254]: E0130 14:11:04.948906 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.948940 kubelet[3254]: W0130 14:11:04.948925 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.949942 kubelet[3254]: E0130 14:11:04.949202 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.949942 kubelet[3254]: E0130 14:11:04.949454 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.949942 kubelet[3254]: W0130 14:11:04.949465 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.949942 kubelet[3254]: E0130 14:11:04.949649 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.949942 kubelet[3254]: E0130 14:11:04.949904 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.949942 kubelet[3254]: W0130 14:11:04.949914 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.950179 kubelet[3254]: E0130 14:11:04.950108 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.950347 kubelet[3254]: E0130 14:11:04.950313 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.950347 kubelet[3254]: W0130 14:11:04.950332 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.950534 kubelet[3254]: E0130 14:11:04.950503 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.950808 kubelet[3254]: E0130 14:11:04.950774 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.950808 kubelet[3254]: W0130 14:11:04.950789 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.950808 kubelet[3254]: E0130 14:11:04.950802 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.951308 kubelet[3254]: E0130 14:11:04.951278 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.951308 kubelet[3254]: W0130 14:11:04.951304 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.951443 kubelet[3254]: E0130 14:11:04.951316 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.951811 kubelet[3254]: E0130 14:11:04.951774 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.951811 kubelet[3254]: W0130 14:11:04.951794 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.951811 kubelet[3254]: E0130 14:11:04.951805 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:04.952275 kubelet[3254]: E0130 14:11:04.952243 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:04.952275 kubelet[3254]: W0130 14:11:04.952263 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:04.952275 kubelet[3254]: E0130 14:11:04.952275 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:04.956350 containerd[1698]: time="2025-01-30T14:11:04.956283318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gm4qv,Uid:53561ea3-b8d5-4924-9044-d18f698000b4,Namespace:calico-system,Attempt:0,}" Jan 30 14:11:04.997237 containerd[1698]: time="2025-01-30T14:11:04.995766439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d594bb65-54f2r,Uid:b7f20152-a588-439d-bae2-c025f25c28e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"42f82ea6e5877e1e355935a4e82440537a2e12e4add93c577c1c2ffb11ca8d81\"" Jan 30 14:11:05.006274 containerd[1698]: time="2025-01-30T14:11:05.004855697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 14:11:05.028793 containerd[1698]: time="2025-01-30T14:11:05.028574146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:05.028793 containerd[1698]: time="2025-01-30T14:11:05.028641866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:05.028793 containerd[1698]: time="2025-01-30T14:11:05.028658546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:05.028793 containerd[1698]: time="2025-01-30T14:11:05.028745306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:05.050154 kubelet[3254]: E0130 14:11:05.049943 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.050154 kubelet[3254]: W0130 14:11:05.049967 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.050154 kubelet[3254]: E0130 14:11:05.050015 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.050669 kubelet[3254]: E0130 14:11:05.050647 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.050819 kubelet[3254]: W0130 14:11:05.050727 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.050819 kubelet[3254]: E0130 14:11:05.050759 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.051416 kubelet[3254]: E0130 14:11:05.051281 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.051416 kubelet[3254]: W0130 14:11:05.051403 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.051416 kubelet[3254]: E0130 14:11:05.051425 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.052067 kubelet[3254]: E0130 14:11:05.052041 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.052067 kubelet[3254]: W0130 14:11:05.052059 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.052328 kubelet[3254]: E0130 14:11:05.052303 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.053105 kubelet[3254]: E0130 14:11:05.052591 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.053105 kubelet[3254]: W0130 14:11:05.052618 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.053105 kubelet[3254]: E0130 14:11:05.052946 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.054653 kubelet[3254]: E0130 14:11:05.053514 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.054653 kubelet[3254]: W0130 14:11:05.053531 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.054653 kubelet[3254]: E0130 14:11:05.053783 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.054653 kubelet[3254]: W0130 14:11:05.053802 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.054653 kubelet[3254]: E0130 14:11:05.054398 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.054882 kubelet[3254]: W0130 14:11:05.054419 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.054882 kubelet[3254]: E0130 14:11:05.054863 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.055084 kubelet[3254]: E0130 14:11:05.054987 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.055084 kubelet[3254]: E0130 14:11:05.055062 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.055553 kubelet[3254]: E0130 14:11:05.055526 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.055553 kubelet[3254]: W0130 14:11:05.055547 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.055642 kubelet[3254]: E0130 14:11:05.055573 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.056881 kubelet[3254]: E0130 14:11:05.056521 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.056881 kubelet[3254]: W0130 14:11:05.056548 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.056881 kubelet[3254]: E0130 14:11:05.056649 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.057969 kubelet[3254]: E0130 14:11:05.057614 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.057969 kubelet[3254]: W0130 14:11:05.057639 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.058301 kubelet[3254]: E0130 14:11:05.058198 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.058637 kubelet[3254]: E0130 14:11:05.058449 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.058637 kubelet[3254]: W0130 14:11:05.058469 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.058712 kubelet[3254]: E0130 14:11:05.058650 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.060002 kubelet[3254]: E0130 14:11:05.059550 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.060002 kubelet[3254]: W0130 14:11:05.059571 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.060218 kubelet[3254]: E0130 14:11:05.060155 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.061422 kubelet[3254]: E0130 14:11:05.060900 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.061422 kubelet[3254]: W0130 14:11:05.060918 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.061422 kubelet[3254]: E0130 14:11:05.061034 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.062531 kubelet[3254]: E0130 14:11:05.062480 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.062531 kubelet[3254]: W0130 14:11:05.062503 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.064264 kubelet[3254]: E0130 14:11:05.063069 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.064264 kubelet[3254]: E0130 14:11:05.063760 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.064264 kubelet[3254]: W0130 14:11:05.063781 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.064264 kubelet[3254]: E0130 14:11:05.064066 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.064264 kubelet[3254]: W0130 14:11:05.064076 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.064484 kubelet[3254]: E0130 14:11:05.064447 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.064484 kubelet[3254]: W0130 14:11:05.064458 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.066061 kubelet[3254]: E0130 14:11:05.064611 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.066061 kubelet[3254]: E0130 14:11:05.064646 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.066061 kubelet[3254]: E0130 14:11:05.065008 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.066061 kubelet[3254]: W0130 14:11:05.065020 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.066061 kubelet[3254]: E0130 14:11:05.065058 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.066061 kubelet[3254]: E0130 14:11:05.065376 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.066061 kubelet[3254]: W0130 14:11:05.065388 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.066061 kubelet[3254]: E0130 14:11:05.065398 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.066061 kubelet[3254]: E0130 14:11:05.065643 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.066061 kubelet[3254]: W0130 14:11:05.065654 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.066454 kubelet[3254]: E0130 14:11:05.065672 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.066454 kubelet[3254]: E0130 14:11:05.065893 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.066454 kubelet[3254]: W0130 14:11:05.065903 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.066454 kubelet[3254]: E0130 14:11:05.065913 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.066454 kubelet[3254]: E0130 14:11:05.066230 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.067140 kubelet[3254]: E0130 14:11:05.066820 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.067140 kubelet[3254]: W0130 14:11:05.066944 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.067140 kubelet[3254]: E0130 14:11:05.066978 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.067785 kubelet[3254]: E0130 14:11:05.067735 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.067785 kubelet[3254]: W0130 14:11:05.067760 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.067902 kubelet[3254]: E0130 14:11:05.067797 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.069173 kubelet[3254]: E0130 14:11:05.068271 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.069173 kubelet[3254]: W0130 14:11:05.068290 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.069173 kubelet[3254]: E0130 14:11:05.068305 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:05.076350 systemd[1]: Started cri-containerd-aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80.scope - libcontainer container aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80. Jan 30 14:11:05.092194 kubelet[3254]: E0130 14:11:05.091954 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:05.092194 kubelet[3254]: W0130 14:11:05.091981 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:05.092194 kubelet[3254]: E0130 14:11:05.092017 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:05.104781 containerd[1698]: time="2025-01-30T14:11:05.104408501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gm4qv,Uid:53561ea3-b8d5-4924-9044-d18f698000b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80\"" Jan 30 14:11:06.113351 kubelet[3254]: E0130 14:11:06.110913 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdxt7" podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" Jan 30 14:11:06.187453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526990302.mount: Deactivated successfully. Jan 30 14:11:06.663178 containerd[1698]: time="2025-01-30T14:11:06.663001793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:06.665775 containerd[1698]: time="2025-01-30T14:11:06.665720039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 30 14:11:06.669183 containerd[1698]: time="2025-01-30T14:11:06.669091646Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:06.673627 containerd[1698]: time="2025-01-30T14:11:06.673504414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:06.674438 containerd[1698]: time="2025-01-30T14:11:06.674259216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id 
\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.669345559s" Jan 30 14:11:06.674438 containerd[1698]: time="2025-01-30T14:11:06.674307856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 14:11:06.676465 containerd[1698]: time="2025-01-30T14:11:06.675906979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:11:06.692345 containerd[1698]: time="2025-01-30T14:11:06.692230612Z" level=info msg="CreateContainer within sandbox \"42f82ea6e5877e1e355935a4e82440537a2e12e4add93c577c1c2ffb11ca8d81\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 14:11:06.738388 containerd[1698]: time="2025-01-30T14:11:06.736966422Z" level=info msg="CreateContainer within sandbox \"42f82ea6e5877e1e355935a4e82440537a2e12e4add93c577c1c2ffb11ca8d81\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c4d9b4f81534d20427fc533a7f455247a774e53a2c4f9483a1b8def0374e8900\"" Jan 30 14:11:06.739532 containerd[1698]: time="2025-01-30T14:11:06.739474987Z" level=info msg="StartContainer for \"c4d9b4f81534d20427fc533a7f455247a774e53a2c4f9483a1b8def0374e8900\"" Jan 30 14:11:06.778385 systemd[1]: Started cri-containerd-c4d9b4f81534d20427fc533a7f455247a774e53a2c4f9483a1b8def0374e8900.scope - libcontainer container c4d9b4f81534d20427fc533a7f455247a774e53a2c4f9483a1b8def0374e8900. 
Jan 30 14:11:06.816459 containerd[1698]: time="2025-01-30T14:11:06.816027860Z" level=info msg="StartContainer for \"c4d9b4f81534d20427fc533a7f455247a774e53a2c4f9483a1b8def0374e8900\" returns successfully" Jan 30 14:11:07.264590 kubelet[3254]: E0130 14:11:07.264547 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.264590 kubelet[3254]: W0130 14:11:07.264577 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.264590 kubelet[3254]: E0130 14:11:07.264599 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.265616 kubelet[3254]: E0130 14:11:07.264814 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.265616 kubelet[3254]: W0130 14:11:07.264823 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.265616 kubelet[3254]: E0130 14:11:07.264832 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.265616 kubelet[3254]: E0130 14:11:07.265076 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.265616 kubelet[3254]: W0130 14:11:07.265088 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.265616 kubelet[3254]: E0130 14:11:07.265121 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.265616 kubelet[3254]: E0130 14:11:07.265346 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.265616 kubelet[3254]: W0130 14:11:07.265356 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.265616 kubelet[3254]: E0130 14:11:07.265365 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.265616 kubelet[3254]: E0130 14:11:07.265605 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.266222 kubelet[3254]: W0130 14:11:07.265622 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.266222 kubelet[3254]: E0130 14:11:07.265634 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.266222 kubelet[3254]: E0130 14:11:07.265840 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.266222 kubelet[3254]: W0130 14:11:07.265849 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.266222 kubelet[3254]: E0130 14:11:07.265857 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.266222 kubelet[3254]: E0130 14:11:07.266029 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.266222 kubelet[3254]: W0130 14:11:07.266120 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.266222 kubelet[3254]: E0130 14:11:07.266134 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.266526 kubelet[3254]: E0130 14:11:07.266388 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.266526 kubelet[3254]: W0130 14:11:07.266399 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.266526 kubelet[3254]: E0130 14:11:07.266413 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.266698 kubelet[3254]: E0130 14:11:07.266672 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.266698 kubelet[3254]: W0130 14:11:07.266689 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.266698 kubelet[3254]: E0130 14:11:07.266698 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.266944 kubelet[3254]: E0130 14:11:07.266927 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.266995 kubelet[3254]: W0130 14:11:07.266946 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.266995 kubelet[3254]: E0130 14:11:07.266955 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.267278 kubelet[3254]: E0130 14:11:07.267260 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.267342 kubelet[3254]: W0130 14:11:07.267310 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.267342 kubelet[3254]: E0130 14:11:07.267322 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.267817 kubelet[3254]: E0130 14:11:07.267793 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.267817 kubelet[3254]: W0130 14:11:07.267811 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.268181 kubelet[3254]: E0130 14:11:07.267825 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.268815 kubelet[3254]: E0130 14:11:07.268786 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.268815 kubelet[3254]: W0130 14:11:07.268808 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.268931 kubelet[3254]: E0130 14:11:07.268824 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.269106 kubelet[3254]: E0130 14:11:07.269077 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.269310 kubelet[3254]: W0130 14:11:07.269090 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.269310 kubelet[3254]: E0130 14:11:07.269133 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.270002 kubelet[3254]: E0130 14:11:07.269803 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.270002 kubelet[3254]: W0130 14:11:07.269822 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.270002 kubelet[3254]: E0130 14:11:07.269837 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.270577 kubelet[3254]: E0130 14:11:07.270562 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.270947 kubelet[3254]: W0130 14:11:07.270675 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.270947 kubelet[3254]: E0130 14:11:07.270696 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.271232 kubelet[3254]: E0130 14:11:07.271212 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.271458 kubelet[3254]: W0130 14:11:07.271351 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.271458 kubelet[3254]: E0130 14:11:07.271386 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.271778 kubelet[3254]: E0130 14:11:07.271739 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.271778 kubelet[3254]: W0130 14:11:07.271770 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.271867 kubelet[3254]: E0130 14:11:07.271795 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.272109 kubelet[3254]: E0130 14:11:07.272074 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.272109 kubelet[3254]: W0130 14:11:07.272089 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.272210 kubelet[3254]: E0130 14:11:07.272137 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.272352 kubelet[3254]: E0130 14:11:07.272326 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.272352 kubelet[3254]: W0130 14:11:07.272339 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.272501 kubelet[3254]: E0130 14:11:07.272357 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.272626 kubelet[3254]: E0130 14:11:07.272592 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.272626 kubelet[3254]: W0130 14:11:07.272606 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.272626 kubelet[3254]: E0130 14:11:07.272623 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.272856 kubelet[3254]: E0130 14:11:07.272837 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.272856 kubelet[3254]: W0130 14:11:07.272851 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.273152 kubelet[3254]: E0130 14:11:07.272983 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.273152 kubelet[3254]: E0130 14:11:07.273000 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.273152 kubelet[3254]: W0130 14:11:07.273082 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.273254 kubelet[3254]: E0130 14:11:07.273208 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.273820 kubelet[3254]: E0130 14:11:07.273622 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.273820 kubelet[3254]: W0130 14:11:07.273643 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.273820 kubelet[3254]: E0130 14:11:07.273670 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.274486 kubelet[3254]: E0130 14:11:07.274320 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.275200 kubelet[3254]: W0130 14:11:07.275030 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.275200 kubelet[3254]: E0130 14:11:07.275113 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.275767 kubelet[3254]: E0130 14:11:07.275686 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.275767 kubelet[3254]: W0130 14:11:07.275708 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.275767 kubelet[3254]: E0130 14:11:07.275762 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.276305 kubelet[3254]: E0130 14:11:07.276181 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.276305 kubelet[3254]: W0130 14:11:07.276198 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.276305 kubelet[3254]: E0130 14:11:07.276242 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.276596 kubelet[3254]: E0130 14:11:07.276524 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.276596 kubelet[3254]: W0130 14:11:07.276568 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.276830 kubelet[3254]: E0130 14:11:07.276729 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.277994 kubelet[3254]: E0130 14:11:07.277643 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.277994 kubelet[3254]: W0130 14:11:07.277668 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.277994 kubelet[3254]: E0130 14:11:07.277701 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.278585 kubelet[3254]: E0130 14:11:07.278561 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.278706 kubelet[3254]: W0130 14:11:07.278689 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.278847 kubelet[3254]: E0130 14:11:07.278811 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.279460 kubelet[3254]: E0130 14:11:07.279222 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.279460 kubelet[3254]: W0130 14:11:07.279255 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.279460 kubelet[3254]: E0130 14:11:07.279272 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.280033 kubelet[3254]: E0130 14:11:07.279710 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.280033 kubelet[3254]: W0130 14:11:07.279733 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.280033 kubelet[3254]: E0130 14:11:07.279748 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:11:07.280291 kubelet[3254]: E0130 14:11:07.280273 3254 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:11:07.280359 kubelet[3254]: W0130 14:11:07.280346 3254 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:11:07.280424 kubelet[3254]: E0130 14:11:07.280411 3254 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:11:07.883970 containerd[1698]: time="2025-01-30T14:11:07.883910361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:07.887210 containerd[1698]: time="2025-01-30T14:11:07.887147847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 14:11:07.891217 containerd[1698]: time="2025-01-30T14:11:07.891145295Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:07.897581 containerd[1698]: time="2025-01-30T14:11:07.897527388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:07.898707 containerd[1698]: time="2025-01-30T14:11:07.898476590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.222514451s" Jan 30 14:11:07.898707 containerd[1698]: time="2025-01-30T14:11:07.898529310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 14:11:07.902067 containerd[1698]: time="2025-01-30T14:11:07.901823917Z" level=info msg="CreateContainer within sandbox \"aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:11:07.945275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887673408.mount: Deactivated successfully. Jan 30 14:11:07.963408 containerd[1698]: time="2025-01-30T14:11:07.963302200Z" level=info msg="CreateContainer within sandbox \"aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253\"" Jan 30 14:11:07.965527 containerd[1698]: time="2025-01-30T14:11:07.965381004Z" level=info msg="StartContainer for \"a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253\"" Jan 30 14:11:08.006382 systemd[1]: Started cri-containerd-a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253.scope - libcontainer container a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253. Jan 30 14:11:08.058163 containerd[1698]: time="2025-01-30T14:11:08.057888670Z" level=info msg="StartContainer for \"a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253\" returns successfully" Jan 30 14:11:08.066363 systemd[1]: cri-containerd-a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253.scope: Deactivated successfully. 
Jan 30 14:11:08.101048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253-rootfs.mount: Deactivated successfully. Jan 30 14:11:08.110651 kubelet[3254]: E0130 14:11:08.110574 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdxt7" podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" Jan 30 14:11:08.418540 kubelet[3254]: I0130 14:11:08.228725 3254 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:11:08.418540 kubelet[3254]: I0130 14:11:08.253056 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9d594bb65-54f2r" podStartSLOduration=2.578610132 podStartE2EDuration="4.253030581s" podCreationTimestamp="2025-01-30 14:11:04 +0000 UTC" firstStartedPulling="2025-01-30 14:11:05.00122489 +0000 UTC m=+22.022531915" lastFinishedPulling="2025-01-30 14:11:06.675645219 +0000 UTC m=+23.696952364" observedRunningTime="2025-01-30 14:11:07.239016868 +0000 UTC m=+24.260323893" watchObservedRunningTime="2025-01-30 14:11:08.253030581 +0000 UTC m=+25.274337606" Jan 30 14:11:08.939916 containerd[1698]: time="2025-01-30T14:11:08.939843118Z" level=info msg="shim disconnected" id=a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253 namespace=k8s.io Jan 30 14:11:08.939916 containerd[1698]: time="2025-01-30T14:11:08.939906118Z" level=warning msg="cleaning up after shim disconnected" id=a1431ba5dc588c4300e0cef4e74cb7b115b9c3a3367e3142b472d208c8706253 namespace=k8s.io Jan 30 14:11:08.939916 containerd[1698]: time="2025-01-30T14:11:08.939917278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:11:09.239731 containerd[1698]: time="2025-01-30T14:11:09.239358518Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:11:10.111193 kubelet[3254]: E0130 14:11:10.111121 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdxt7" podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" Jan 30 14:11:12.111671 kubelet[3254]: E0130 14:11:12.110926 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdxt7" podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" Jan 30 14:11:12.602148 containerd[1698]: time="2025-01-30T14:11:12.601771539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:12.604129 containerd[1698]: time="2025-01-30T14:11:12.603952183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 14:11:12.611436 containerd[1698]: time="2025-01-30T14:11:12.611357198Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:12.618834 containerd[1698]: time="2025-01-30T14:11:12.618767773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:12.620622 containerd[1698]: time="2025-01-30T14:11:12.619411414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.380001696s" Jan 30 14:11:12.620622 containerd[1698]: time="2025-01-30T14:11:12.619457494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 14:11:12.624526 containerd[1698]: time="2025-01-30T14:11:12.624464104Z" level=info msg="CreateContainer within sandbox \"aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:11:12.674805 containerd[1698]: time="2025-01-30T14:11:12.674747085Z" level=info msg="CreateContainer within sandbox \"aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535\"" Jan 30 14:11:12.675607 containerd[1698]: time="2025-01-30T14:11:12.675571207Z" level=info msg="StartContainer for \"c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535\"" Jan 30 14:11:12.713723 systemd[1]: Started cri-containerd-c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535.scope - libcontainer container c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535. 
Jan 30 14:11:12.746266 containerd[1698]: time="2025-01-30T14:11:12.746183068Z" level=info msg="StartContainer for \"c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535\" returns successfully" Jan 30 14:11:14.110707 kubelet[3254]: E0130 14:11:14.110619 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdxt7" podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" Jan 30 14:11:14.179227 containerd[1698]: time="2025-01-30T14:11:14.179159121Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:11:14.183353 systemd[1]: cri-containerd-c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535.scope: Deactivated successfully. Jan 30 14:11:14.210342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535-rootfs.mount: Deactivated successfully. 
Jan 30 14:11:14.261751 kubelet[3254]: I0130 14:11:14.260355 3254 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:11:14.418440 kubelet[3254]: I0130 14:11:14.298683 3254 topology_manager.go:215] "Topology Admit Handler" podUID="93092ddf-3818-41db-a977-b3fd99b148bc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nnkgs" Jan 30 14:11:14.418440 kubelet[3254]: I0130 14:11:14.307082 3254 topology_manager.go:215] "Topology Admit Handler" podUID="4cddc560-c51f-446e-970e-57663cf70483" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zsqbw" Jan 30 14:11:14.418440 kubelet[3254]: I0130 14:11:14.312429 3254 topology_manager.go:215] "Topology Admit Handler" podUID="8260605b-df3c-4f82-90f7-e582f91db352" podNamespace="calico-system" podName="calico-kube-controllers-5fbf6f7f87-2b92f" Jan 30 14:11:14.418440 kubelet[3254]: I0130 14:11:14.315497 3254 topology_manager.go:215] "Topology Admit Handler" podUID="c8bca355-2aac-4395-868b-47cc9d1748b5" podNamespace="calico-apiserver" podName="calico-apiserver-64c6bfd648-mrbcl" Jan 30 14:11:14.418440 kubelet[3254]: I0130 14:11:14.320628 3254 topology_manager.go:215] "Topology Admit Handler" podUID="f17557fc-31f6-483c-8d6e-4f757373509d" podNamespace="calico-apiserver" podName="calico-apiserver-64c6bfd648-vqkjj" Jan 30 14:11:14.312869 systemd[1]: Created slice kubepods-burstable-pod93092ddf_3818_41db_a977_b3fd99b148bc.slice - libcontainer container kubepods-burstable-pod93092ddf_3818_41db_a977_b3fd99b148bc.slice. Jan 30 14:11:14.329828 systemd[1]: Created slice kubepods-burstable-pod4cddc560_c51f_446e_970e_57663cf70483.slice - libcontainer container kubepods-burstable-pod4cddc560_c51f_446e_970e_57663cf70483.slice. Jan 30 14:11:14.339723 systemd[1]: Created slice kubepods-besteffort-pod8260605b_df3c_4f82_90f7_e582f91db352.slice - libcontainer container kubepods-besteffort-pod8260605b_df3c_4f82_90f7_e582f91db352.slice. 
Jan 30 14:11:14.351578 systemd[1]: Created slice kubepods-besteffort-podc8bca355_2aac_4395_868b_47cc9d1748b5.slice - libcontainer container kubepods-besteffort-podc8bca355_2aac_4395_868b_47cc9d1748b5.slice. Jan 30 14:11:14.361359 systemd[1]: Created slice kubepods-besteffort-podf17557fc_31f6_483c_8d6e_4f757373509d.slice - libcontainer container kubepods-besteffort-podf17557fc_31f6_483c_8d6e_4f757373509d.slice. Jan 30 14:11:14.421367 kubelet[3254]: I0130 14:11:14.420376 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cddc560-c51f-446e-970e-57663cf70483-config-volume\") pod \"coredns-7db6d8ff4d-zsqbw\" (UID: \"4cddc560-c51f-446e-970e-57663cf70483\") " pod="kube-system/coredns-7db6d8ff4d-zsqbw" Jan 30 14:11:14.421367 kubelet[3254]: I0130 14:11:14.420462 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c8bca355-2aac-4395-868b-47cc9d1748b5-calico-apiserver-certs\") pod \"calico-apiserver-64c6bfd648-mrbcl\" (UID: \"c8bca355-2aac-4395-868b-47cc9d1748b5\") " pod="calico-apiserver/calico-apiserver-64c6bfd648-mrbcl" Jan 30 14:11:14.421367 kubelet[3254]: I0130 14:11:14.420494 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tx6r\" (UniqueName: \"kubernetes.io/projected/f17557fc-31f6-483c-8d6e-4f757373509d-kube-api-access-4tx6r\") pod \"calico-apiserver-64c6bfd648-vqkjj\" (UID: \"f17557fc-31f6-483c-8d6e-4f757373509d\") " pod="calico-apiserver/calico-apiserver-64c6bfd648-vqkjj" Jan 30 14:11:14.421367 kubelet[3254]: I0130 14:11:14.420531 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfglk\" (UniqueName: \"kubernetes.io/projected/93092ddf-3818-41db-a977-b3fd99b148bc-kube-api-access-jfglk\") pod 
\"coredns-7db6d8ff4d-nnkgs\" (UID: \"93092ddf-3818-41db-a977-b3fd99b148bc\") " pod="kube-system/coredns-7db6d8ff4d-nnkgs" Jan 30 14:11:14.421367 kubelet[3254]: I0130 14:11:14.420556 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93092ddf-3818-41db-a977-b3fd99b148bc-config-volume\") pod \"coredns-7db6d8ff4d-nnkgs\" (UID: \"93092ddf-3818-41db-a977-b3fd99b148bc\") " pod="kube-system/coredns-7db6d8ff4d-nnkgs" Jan 30 14:11:14.421657 kubelet[3254]: I0130 14:11:14.420624 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzhn\" (UniqueName: \"kubernetes.io/projected/8260605b-df3c-4f82-90f7-e582f91db352-kube-api-access-4xzhn\") pod \"calico-kube-controllers-5fbf6f7f87-2b92f\" (UID: \"8260605b-df3c-4f82-90f7-e582f91db352\") " pod="calico-system/calico-kube-controllers-5fbf6f7f87-2b92f" Jan 30 14:11:14.421657 kubelet[3254]: I0130 14:11:14.420645 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2shg\" (UniqueName: \"kubernetes.io/projected/c8bca355-2aac-4395-868b-47cc9d1748b5-kube-api-access-s2shg\") pod \"calico-apiserver-64c6bfd648-mrbcl\" (UID: \"c8bca355-2aac-4395-868b-47cc9d1748b5\") " pod="calico-apiserver/calico-apiserver-64c6bfd648-mrbcl" Jan 30 14:11:14.421657 kubelet[3254]: I0130 14:11:14.420685 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8260605b-df3c-4f82-90f7-e582f91db352-tigera-ca-bundle\") pod \"calico-kube-controllers-5fbf6f7f87-2b92f\" (UID: \"8260605b-df3c-4f82-90f7-e582f91db352\") " pod="calico-system/calico-kube-controllers-5fbf6f7f87-2b92f" Jan 30 14:11:14.421657 kubelet[3254]: I0130 14:11:14.420715 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lr6t6\" (UniqueName: \"kubernetes.io/projected/4cddc560-c51f-446e-970e-57663cf70483-kube-api-access-lr6t6\") pod \"coredns-7db6d8ff4d-zsqbw\" (UID: \"4cddc560-c51f-446e-970e-57663cf70483\") " pod="kube-system/coredns-7db6d8ff4d-zsqbw" Jan 30 14:11:14.421657 kubelet[3254]: I0130 14:11:14.421168 3254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f17557fc-31f6-483c-8d6e-4f757373509d-calico-apiserver-certs\") pod \"calico-apiserver-64c6bfd648-vqkjj\" (UID: \"f17557fc-31f6-483c-8d6e-4f757373509d\") " pod="calico-apiserver/calico-apiserver-64c6bfd648-vqkjj" Jan 30 14:11:14.765130 containerd[1698]: time="2025-01-30T14:11:14.764715605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c6bfd648-mrbcl,Uid:c8bca355-2aac-4395-868b-47cc9d1748b5,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:11:14.766300 containerd[1698]: time="2025-01-30T14:11:14.764880846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c6bfd648-vqkjj,Uid:f17557fc-31f6-483c-8d6e-4f757373509d,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:11:14.766300 containerd[1698]: time="2025-01-30T14:11:14.765928767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsqbw,Uid:4cddc560-c51f-446e-970e-57663cf70483,Namespace:kube-system,Attempt:0,}" Jan 30 14:11:14.766699 containerd[1698]: time="2025-01-30T14:11:14.766661368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnkgs,Uid:93092ddf-3818-41db-a977-b3fd99b148bc,Namespace:kube-system,Attempt:0,}" Jan 30 14:11:14.767511 containerd[1698]: time="2025-01-30T14:11:14.766902409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fbf6f7f87-2b92f,Uid:8260605b-df3c-4f82-90f7-e582f91db352,Namespace:calico-system,Attempt:0,}" Jan 30 14:11:14.936444 containerd[1698]: 
time="2025-01-30T14:11:14.936354956Z" level=info msg="shim disconnected" id=c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535 namespace=k8s.io Jan 30 14:11:14.936444 containerd[1698]: time="2025-01-30T14:11:14.936417276Z" level=warning msg="cleaning up after shim disconnected" id=c7a3056b2be2b98a42c272ae98d56c4ef03a6c76ba7b95e7143a65538d126535 namespace=k8s.io Jan 30 14:11:14.936444 containerd[1698]: time="2025-01-30T14:11:14.936425716Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:11:15.128218 containerd[1698]: time="2025-01-30T14:11:15.128149379Z" level=error msg="Failed to destroy network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.128876 containerd[1698]: time="2025-01-30T14:11:15.128822020Z" level=error msg="encountered an error cleaning up failed sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.129372 containerd[1698]: time="2025-01-30T14:11:15.129332781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fbf6f7f87-2b92f,Uid:8260605b-df3c-4f82-90f7-e582f91db352,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.130417 kubelet[3254]: E0130 14:11:15.130356 3254 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.131125 kubelet[3254]: E0130 14:11:15.131075 3254 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fbf6f7f87-2b92f" Jan 30 14:11:15.131263 kubelet[3254]: E0130 14:11:15.131246 3254 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fbf6f7f87-2b92f" Jan 30 14:11:15.131498 kubelet[3254]: E0130 14:11:15.131465 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fbf6f7f87-2b92f_calico-system(8260605b-df3c-4f82-90f7-e582f91db352)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fbf6f7f87-2b92f_calico-system(8260605b-df3c-4f82-90f7-e582f91db352)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fbf6f7f87-2b92f" podUID="8260605b-df3c-4f82-90f7-e582f91db352" Jan 30 14:11:15.201870 containerd[1698]: time="2025-01-30T14:11:15.201710095Z" level=error msg="Failed to destroy network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.204154 containerd[1698]: time="2025-01-30T14:11:15.203374218Z" level=error msg="encountered an error cleaning up failed sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.204154 containerd[1698]: time="2025-01-30T14:11:15.203471418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsqbw,Uid:4cddc560-c51f-446e-970e-57663cf70483,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.205076 containerd[1698]: time="2025-01-30T14:11:15.205036500Z" level=error msg="Failed to destroy network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 14:11:15.205267 kubelet[3254]: E0130 14:11:15.205161 3254 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.205267 kubelet[3254]: E0130 14:11:15.205228 3254 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zsqbw" Jan 30 14:11:15.205471 kubelet[3254]: E0130 14:11:15.205332 3254 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zsqbw" Jan 30 14:11:15.208131 kubelet[3254]: E0130 14:11:15.205516 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zsqbw_kube-system(4cddc560-c51f-446e-970e-57663cf70483)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zsqbw_kube-system(4cddc560-c51f-446e-970e-57663cf70483)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zsqbw" podUID="4cddc560-c51f-446e-970e-57663cf70483" Jan 30 14:11:15.210546 containerd[1698]: time="2025-01-30T14:11:15.209215907Z" level=error msg="Failed to destroy network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.214738 containerd[1698]: time="2025-01-30T14:11:15.214441995Z" level=error msg="encountered an error cleaning up failed sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.214738 containerd[1698]: time="2025-01-30T14:11:15.214528995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c6bfd648-vqkjj,Uid:f17557fc-31f6-483c-8d6e-4f757373509d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.214961 kubelet[3254]: E0130 14:11:15.214780 3254 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.214961 kubelet[3254]: E0130 14:11:15.214840 3254 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c6bfd648-vqkjj" Jan 30 14:11:15.214961 kubelet[3254]: E0130 14:11:15.214875 3254 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c6bfd648-vqkjj" Jan 30 14:11:15.215065 kubelet[3254]: E0130 14:11:15.214938 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64c6bfd648-vqkjj_calico-apiserver(f17557fc-31f6-483c-8d6e-4f757373509d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64c6bfd648-vqkjj_calico-apiserver(f17557fc-31f6-483c-8d6e-4f757373509d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c6bfd648-vqkjj" podUID="f17557fc-31f6-483c-8d6e-4f757373509d" Jan 30 
14:11:15.219576 containerd[1698]: time="2025-01-30T14:11:15.219425163Z" level=error msg="encountered an error cleaning up failed sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.220025 containerd[1698]: time="2025-01-30T14:11:15.219842084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnkgs,Uid:93092ddf-3818-41db-a977-b3fd99b148bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.220807 kubelet[3254]: E0130 14:11:15.220373 3254 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.220807 kubelet[3254]: E0130 14:11:15.220446 3254 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nnkgs" Jan 30 14:11:15.220807 kubelet[3254]: E0130 14:11:15.220470 3254 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nnkgs" Jan 30 14:11:15.221274 kubelet[3254]: E0130 14:11:15.220523 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nnkgs_kube-system(93092ddf-3818-41db-a977-b3fd99b148bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nnkgs_kube-system(93092ddf-3818-41db-a977-b3fd99b148bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nnkgs" podUID="93092ddf-3818-41db-a977-b3fd99b148bc" Jan 30 14:11:15.223257 containerd[1698]: time="2025-01-30T14:11:15.222230488Z" level=error msg="Failed to destroy network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.223257 containerd[1698]: time="2025-01-30T14:11:15.222684648Z" level=error msg="encountered an error cleaning up failed sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.223257 containerd[1698]: time="2025-01-30T14:11:15.222744608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c6bfd648-mrbcl,Uid:c8bca355-2aac-4395-868b-47cc9d1748b5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.224693 kubelet[3254]: E0130 14:11:15.224643 3254 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.224814 kubelet[3254]: E0130 14:11:15.224708 3254 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c6bfd648-mrbcl" Jan 30 14:11:15.224814 kubelet[3254]: E0130 14:11:15.224730 3254 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c6bfd648-mrbcl" Jan 30 14:11:15.224814 kubelet[3254]: E0130 14:11:15.224778 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64c6bfd648-mrbcl_calico-apiserver(c8bca355-2aac-4395-868b-47cc9d1748b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64c6bfd648-mrbcl_calico-apiserver(c8bca355-2aac-4395-868b-47cc9d1748b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c6bfd648-mrbcl" podUID="c8bca355-2aac-4395-868b-47cc9d1748b5" Jan 30 14:11:15.231499 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867-shm.mount: Deactivated successfully. Jan 30 14:11:15.231631 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818-shm.mount: Deactivated successfully. Jan 30 14:11:15.231684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9-shm.mount: Deactivated successfully. 
Jan 30 14:11:15.258991 kubelet[3254]: I0130 14:11:15.258412 3254 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:15.259173 containerd[1698]: time="2025-01-30T14:11:15.259060586Z" level=info msg="StopPodSandbox for \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\"" Jan 30 14:11:15.259336 containerd[1698]: time="2025-01-30T14:11:15.259284466Z" level=info msg="Ensure that sandbox 862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9 in task-service has been cleanup successfully" Jan 30 14:11:15.261815 kubelet[3254]: I0130 14:11:15.261770 3254 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:15.262761 containerd[1698]: time="2025-01-30T14:11:15.262704671Z" level=info msg="StopPodSandbox for \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\"" Jan 30 14:11:15.263337 containerd[1698]: time="2025-01-30T14:11:15.263272672Z" level=info msg="Ensure that sandbox 1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867 in task-service has been cleanup successfully" Jan 30 14:11:15.265202 kubelet[3254]: I0130 14:11:15.264865 3254 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:15.265668 containerd[1698]: time="2025-01-30T14:11:15.265612156Z" level=info msg="StopPodSandbox for \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\"" Jan 30 14:11:15.268960 containerd[1698]: time="2025-01-30T14:11:15.268826521Z" level=info msg="Ensure that sandbox 8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818 in task-service has been cleanup successfully" Jan 30 14:11:15.293219 kubelet[3254]: I0130 14:11:15.291616 3254 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:15.294800 containerd[1698]: time="2025-01-30T14:11:15.294635002Z" level=info msg="StopPodSandbox for \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\"" Jan 30 14:11:15.295123 containerd[1698]: time="2025-01-30T14:11:15.295083643Z" level=info msg="Ensure that sandbox 14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90 in task-service has been cleanup successfully" Jan 30 14:11:15.297797 containerd[1698]: time="2025-01-30T14:11:15.297744727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:11:15.314736 kubelet[3254]: I0130 14:11:15.312980 3254 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:15.314893 containerd[1698]: time="2025-01-30T14:11:15.313775032Z" level=info msg="StopPodSandbox for \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\"" Jan 30 14:11:15.314893 containerd[1698]: time="2025-01-30T14:11:15.313973992Z" level=info msg="Ensure that sandbox 913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e in task-service has been cleanup successfully" Jan 30 14:11:15.373375 containerd[1698]: time="2025-01-30T14:11:15.373303726Z" level=error msg="StopPodSandbox for \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\" failed" error="failed to destroy network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.374007 kubelet[3254]: E0130 14:11:15.373639 3254 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:15.374007 kubelet[3254]: E0130 14:11:15.373711 3254 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867"} Jan 30 14:11:15.374007 kubelet[3254]: E0130 14:11:15.373784 3254 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93092ddf-3818-41db-a977-b3fd99b148bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:15.374007 kubelet[3254]: E0130 14:11:15.373807 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93092ddf-3818-41db-a977-b3fd99b148bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nnkgs" podUID="93092ddf-3818-41db-a977-b3fd99b148bc" Jan 30 14:11:15.377188 containerd[1698]: time="2025-01-30T14:11:15.377021892Z" level=error msg="StopPodSandbox for \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\" failed" error="failed to destroy network for sandbox 
\"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.377680 containerd[1698]: time="2025-01-30T14:11:15.377557933Z" level=error msg="StopPodSandbox for \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\" failed" error="failed to destroy network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.377730 kubelet[3254]: E0130 14:11:15.377313 3254 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:15.377730 kubelet[3254]: E0130 14:11:15.377361 3254 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9"} Jan 30 14:11:15.377730 kubelet[3254]: E0130 14:11:15.377411 3254 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8bca355-2aac-4395-868b-47cc9d1748b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:15.377730 kubelet[3254]: E0130 14:11:15.377434 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8bca355-2aac-4395-868b-47cc9d1748b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c6bfd648-mrbcl" podUID="c8bca355-2aac-4395-868b-47cc9d1748b5" Jan 30 14:11:15.378205 kubelet[3254]: E0130 14:11:15.377938 3254 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:15.378362 kubelet[3254]: E0130 14:11:15.378214 3254 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818"} Jan 30 14:11:15.378362 kubelet[3254]: E0130 14:11:15.378247 3254 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f17557fc-31f6-483c-8d6e-4f757373509d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:15.378362 kubelet[3254]: E0130 14:11:15.378266 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f17557fc-31f6-483c-8d6e-4f757373509d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c6bfd648-vqkjj" podUID="f17557fc-31f6-483c-8d6e-4f757373509d" Jan 30 14:11:15.390264 containerd[1698]: time="2025-01-30T14:11:15.390109433Z" level=error msg="StopPodSandbox for \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\" failed" error="failed to destroy network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.390555 kubelet[3254]: E0130 14:11:15.390396 3254 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:15.391534 kubelet[3254]: E0130 14:11:15.391489 3254 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90"} Jan 30 14:11:15.391620 
kubelet[3254]: E0130 14:11:15.391553 3254 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8260605b-df3c-4f82-90f7-e582f91db352\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:15.391620 kubelet[3254]: E0130 14:11:15.391584 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8260605b-df3c-4f82-90f7-e582f91db352\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fbf6f7f87-2b92f" podUID="8260605b-df3c-4f82-90f7-e582f91db352" Jan 30 14:11:15.399513 containerd[1698]: time="2025-01-30T14:11:15.399445287Z" level=error msg="StopPodSandbox for \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\" failed" error="failed to destroy network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:15.399838 kubelet[3254]: E0130 14:11:15.399778 3254 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:15.399898 kubelet[3254]: E0130 14:11:15.399845 3254 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e"} Jan 30 14:11:15.399898 kubelet[3254]: E0130 14:11:15.399882 3254 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4cddc560-c51f-446e-970e-57663cf70483\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:15.399992 kubelet[3254]: E0130 14:11:15.399904 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4cddc560-c51f-446e-970e-57663cf70483\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zsqbw" podUID="4cddc560-c51f-446e-970e-57663cf70483" Jan 30 14:11:16.117391 systemd[1]: Created slice kubepods-besteffort-pod91e5ffe9_8aa4_4615_8d9e_b9e3697da13a.slice - libcontainer container kubepods-besteffort-pod91e5ffe9_8aa4_4615_8d9e_b9e3697da13a.slice. 
Jan 30 14:11:16.120334 containerd[1698]: time="2025-01-30T14:11:16.120244105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdxt7,Uid:91e5ffe9-8aa4-4615-8d9e-b9e3697da13a,Namespace:calico-system,Attempt:0,}" Jan 30 14:11:16.216421 containerd[1698]: time="2025-01-30T14:11:16.216167657Z" level=error msg="Failed to destroy network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:16.218923 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835-shm.mount: Deactivated successfully. Jan 30 14:11:16.219606 containerd[1698]: time="2025-01-30T14:11:16.219091981Z" level=error msg="encountered an error cleaning up failed sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:16.219606 containerd[1698]: time="2025-01-30T14:11:16.219209021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdxt7,Uid:91e5ffe9-8aa4-4615-8d9e-b9e3697da13a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:16.221463 kubelet[3254]: E0130 14:11:16.219949 3254 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:16.221463 kubelet[3254]: E0130 14:11:16.220176 3254 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdxt7" Jan 30 14:11:16.221463 kubelet[3254]: E0130 14:11:16.220201 3254 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdxt7" Jan 30 14:11:16.222010 kubelet[3254]: E0130 14:11:16.220260 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sdxt7_calico-system(91e5ffe9-8aa4-4615-8d9e-b9e3697da13a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sdxt7_calico-system(91e5ffe9-8aa4-4615-8d9e-b9e3697da13a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdxt7" 
podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" Jan 30 14:11:16.316443 kubelet[3254]: I0130 14:11:16.316382 3254 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:16.317704 containerd[1698]: time="2025-01-30T14:11:16.317224016Z" level=info msg="StopPodSandbox for \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\"" Jan 30 14:11:16.317704 containerd[1698]: time="2025-01-30T14:11:16.317417536Z" level=info msg="Ensure that sandbox 7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835 in task-service has been cleanup successfully" Jan 30 14:11:16.348662 containerd[1698]: time="2025-01-30T14:11:16.348605706Z" level=error msg="StopPodSandbox for \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\" failed" error="failed to destroy network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:11:16.349176 kubelet[3254]: E0130 14:11:16.349089 3254 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:16.349277 kubelet[3254]: E0130 14:11:16.349188 3254 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835"} Jan 30 14:11:16.349277 kubelet[3254]: E0130 14:11:16.349230 3254 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:11:16.349277 kubelet[3254]: E0130 14:11:16.349253 3254 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdxt7" podUID="91e5ffe9-8aa4-4615-8d9e-b9e3697da13a" Jan 30 14:11:19.713694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935646332.mount: Deactivated successfully. 
Jan 30 14:11:20.002996 kubelet[3254]: I0130 14:11:20.002404 3254 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:11:20.316022 containerd[1698]: time="2025-01-30T14:11:20.315858688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:20.319199 containerd[1698]: time="2025-01-30T14:11:20.318979733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 14:11:20.324084 containerd[1698]: time="2025-01-30T14:11:20.324001221Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:20.330616 containerd[1698]: time="2025-01-30T14:11:20.330522792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:20.331692 containerd[1698]: time="2025-01-30T14:11:20.331248793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 5.032364864s" Jan 30 14:11:20.331692 containerd[1698]: time="2025-01-30T14:11:20.331294633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 14:11:20.348857 containerd[1698]: time="2025-01-30T14:11:20.348780460Z" level=info msg="CreateContainer within sandbox \"aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:11:20.384035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842390440.mount: Deactivated successfully. Jan 30 14:11:20.393980 containerd[1698]: time="2025-01-30T14:11:20.393831692Z" level=info msg="CreateContainer within sandbox \"aef7c4d17701a98dbf87b842cadb63a7d1bb9a5135d0fbdf9a4a9b9279b59e80\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"167ae79acb1e59ce533704d0214e1bf9d258e78e0b6df6bfcf7bea0f66d8d709\"" Jan 30 14:11:20.394962 containerd[1698]: time="2025-01-30T14:11:20.394920653Z" level=info msg="StartContainer for \"167ae79acb1e59ce533704d0214e1bf9d258e78e0b6df6bfcf7bea0f66d8d709\"" Jan 30 14:11:20.419612 systemd[1]: Started cri-containerd-167ae79acb1e59ce533704d0214e1bf9d258e78e0b6df6bfcf7bea0f66d8d709.scope - libcontainer container 167ae79acb1e59ce533704d0214e1bf9d258e78e0b6df6bfcf7bea0f66d8d709. Jan 30 14:11:20.460727 containerd[1698]: time="2025-01-30T14:11:20.460663597Z" level=info msg="StartContainer for \"167ae79acb1e59ce533704d0214e1bf9d258e78e0b6df6bfcf7bea0f66d8d709\" returns successfully" Jan 30 14:11:20.665895 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:11:20.666029 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 14:11:21.350411 kubelet[3254]: I0130 14:11:21.349516 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gm4qv" podStartSLOduration=2.123162461 podStartE2EDuration="17.349493111s" podCreationTimestamp="2025-01-30 14:11:04 +0000 UTC" firstStartedPulling="2025-01-30 14:11:05.105999624 +0000 UTC m=+22.127306649" lastFinishedPulling="2025-01-30 14:11:20.332330274 +0000 UTC m=+37.353637299" observedRunningTime="2025-01-30 14:11:21.34922891 +0000 UTC m=+38.370535935" watchObservedRunningTime="2025-01-30 14:11:21.349493111 +0000 UTC m=+38.370800136" Jan 30 14:11:22.419154 kernel: bpftool[4505]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:11:22.658493 systemd-networkd[1449]: vxlan.calico: Link UP Jan 30 14:11:22.658503 systemd-networkd[1449]: vxlan.calico: Gained carrier Jan 30 14:11:24.646289 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Jan 30 14:11:27.113061 containerd[1698]: time="2025-01-30T14:11:27.112992861Z" level=info msg="StopPodSandbox for \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\"" Jan 30 14:11:27.113817 containerd[1698]: time="2025-01-30T14:11:27.113167221Z" level=info msg="StopPodSandbox for \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\"" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.245 [INFO][4609] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.245 [INFO][4609] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" iface="eth0" netns="/var/run/netns/cni-4e853b20-b540-8452-09bb-071806ecdafd" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.246 [INFO][4609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" iface="eth0" netns="/var/run/netns/cni-4e853b20-b540-8452-09bb-071806ecdafd" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.246 [INFO][4609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" iface="eth0" netns="/var/run/netns/cni-4e853b20-b540-8452-09bb-071806ecdafd" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.246 [INFO][4609] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.246 [INFO][4609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.278 [INFO][4626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.279 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.279 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.291 [WARNING][4626] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.292 [INFO][4626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.294 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:27.301270 containerd[1698]: 2025-01-30 14:11:27.297 [INFO][4609] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:27.304512 containerd[1698]: time="2025-01-30T14:11:27.301533516Z" level=info msg="TearDown network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\" successfully" Jan 30 14:11:27.304512 containerd[1698]: time="2025-01-30T14:11:27.301564876Z" level=info msg="StopPodSandbox for \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\" returns successfully" Jan 30 14:11:27.305547 containerd[1698]: time="2025-01-30T14:11:27.305483323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsqbw,Uid:4cddc560-c51f-446e-970e-57663cf70483,Namespace:kube-system,Attempt:1,}" Jan 30 14:11:27.306631 systemd[1]: run-netns-cni\x2d4e853b20\x2db540\x2d8452\x2d09bb\x2d071806ecdafd.mount: Deactivated successfully. 
Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.240 [INFO][4616] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.241 [INFO][4616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" iface="eth0" netns="/var/run/netns/cni-a1d4d991-7dbf-02ab-fdf6-52ec5d4e542f" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.241 [INFO][4616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" iface="eth0" netns="/var/run/netns/cni-a1d4d991-7dbf-02ab-fdf6-52ec5d4e542f" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.241 [INFO][4616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" iface="eth0" netns="/var/run/netns/cni-a1d4d991-7dbf-02ab-fdf6-52ec5d4e542f" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.241 [INFO][4616] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.241 [INFO][4616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.289 [INFO][4625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.290 [INFO][4625] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.294 [INFO][4625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.313 [WARNING][4625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.313 [INFO][4625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.315 [INFO][4625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:27.322756 containerd[1698]: 2025-01-30 14:11:27.318 [INFO][4616] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:27.322756 containerd[1698]: time="2025-01-30T14:11:27.321582111Z" level=info msg="TearDown network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\" successfully" Jan 30 14:11:27.322756 containerd[1698]: time="2025-01-30T14:11:27.321626631Z" level=info msg="StopPodSandbox for \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\" returns successfully" Jan 30 14:11:27.324958 containerd[1698]: time="2025-01-30T14:11:27.324374436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnkgs,Uid:93092ddf-3818-41db-a977-b3fd99b148bc,Namespace:kube-system,Attempt:1,}" Jan 30 14:11:27.325618 systemd[1]: run-netns-cni\x2da1d4d991\x2d7dbf\x2d02ab\x2dfdf6\x2d52ec5d4e542f.mount: Deactivated successfully. Jan 30 14:11:27.555221 systemd-networkd[1449]: cali198851bc01f: Link UP Jan 30 14:11:27.556011 systemd-networkd[1449]: cali198851bc01f: Gained carrier Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.462 [INFO][4639] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0 coredns-7db6d8ff4d- kube-system 4cddc560-c51f-446e-970e-57663cf70483 792 0 2025-01-30 14:10:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-554d7cc729 coredns-7db6d8ff4d-zsqbw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali198851bc01f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsqbw" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-" Jan 30 14:11:27.579565 
containerd[1698]: 2025-01-30 14:11:27.462 [INFO][4639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsqbw" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.493 [INFO][4651] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" HandleID="k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.506 [INFO][4651] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" HandleID="k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332dd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-554d7cc729", "pod":"coredns-7db6d8ff4d-zsqbw", "timestamp":"2025-01-30 14:11:27.493878977 +0000 UTC"}, Hostname:"ci-4081.3.0-a-554d7cc729", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.506 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.506 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.506 [INFO][4651] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-554d7cc729' Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.509 [INFO][4651] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.514 [INFO][4651] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.519 [INFO][4651] ipam/ipam.go 489: Trying affinity for 192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.522 [INFO][4651] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.526 [INFO][4651] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.526 [INFO][4651] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.528 [INFO][4651] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.535 [INFO][4651] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.547 [INFO][4651] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.58.193/26] block=192.168.58.192/26 handle="k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.547 [INFO][4651] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.193/26] handle="k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.547 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:27.579565 containerd[1698]: 2025-01-30 14:11:27.547 [INFO][4651] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.193/26] IPv6=[] ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" HandleID="k8s-pod-network.5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.580997 containerd[1698]: 2025-01-30 14:11:27.550 [INFO][4639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsqbw" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4cddc560-c51f-446e-970e-57663cf70483", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"", Pod:"coredns-7db6d8ff4d-zsqbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali198851bc01f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:27.580997 containerd[1698]: 2025-01-30 14:11:27.551 [INFO][4639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.193/32] ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsqbw" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.580997 containerd[1698]: 2025-01-30 14:11:27.551 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali198851bc01f ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsqbw" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.580997 containerd[1698]: 2025-01-30 14:11:27.555 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsqbw" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:27.580997 containerd[1698]: 2025-01-30 14:11:27.556 [INFO][4639] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsqbw" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4cddc560-c51f-446e-970e-57663cf70483", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f", Pod:"coredns-7db6d8ff4d-zsqbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali198851bc01f", MAC:"b2:65:50:4f:94:3b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:27.580997 containerd[1698]: 2025-01-30 14:11:27.577 [INFO][4639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zsqbw" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:28.144796 containerd[1698]: time="2025-01-30T14:11:28.144012851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:28.144796 containerd[1698]: time="2025-01-30T14:11:28.144222571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:28.145255 containerd[1698]: time="2025-01-30T14:11:28.144824812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:28.145255 containerd[1698]: time="2025-01-30T14:11:28.144974813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:28.176359 systemd[1]: Started cri-containerd-5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f.scope - libcontainer container 5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f. 
Jan 30 14:11:28.216208 containerd[1698]: time="2025-01-30T14:11:28.216064739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zsqbw,Uid:4cddc560-c51f-446e-970e-57663cf70483,Namespace:kube-system,Attempt:1,} returns sandbox id \"5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f\"" Jan 30 14:11:28.225040 containerd[1698]: time="2025-01-30T14:11:28.224552914Z" level=info msg="CreateContainer within sandbox \"5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:11:28.273158 containerd[1698]: time="2025-01-30T14:11:28.272636999Z" level=info msg="CreateContainer within sandbox \"5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b7d314d6859154823707a05f43d732be7af46095cccb8dffe2e226cdf4df0b7\"" Jan 30 14:11:28.275717 containerd[1698]: time="2025-01-30T14:11:28.275300644Z" level=info msg="StartContainer for \"4b7d314d6859154823707a05f43d732be7af46095cccb8dffe2e226cdf4df0b7\"" Jan 30 14:11:28.283617 systemd-networkd[1449]: calicbfceb87516: Link UP Jan 30 14:11:28.286015 systemd-networkd[1449]: calicbfceb87516: Gained carrier Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.167 [INFO][4679] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0 coredns-7db6d8ff4d- kube-system 93092ddf-3818-41db-a977-b3fd99b148bc 791 0 2025-01-30 14:10:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-554d7cc729 coredns-7db6d8ff4d-nnkgs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicbfceb87516 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnkgs" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.167 [INFO][4679] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnkgs" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.216 [INFO][4724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" HandleID="k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.233 [INFO][4724] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" HandleID="k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317930), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-554d7cc729", "pod":"coredns-7db6d8ff4d-nnkgs", "timestamp":"2025-01-30 14:11:28.21694198 +0000 UTC"}, Hostname:"ci-4081.3.0-a-554d7cc729", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.233 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.233 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.233 [INFO][4724] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-554d7cc729' Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.235 [INFO][4724] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.241 [INFO][4724] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.247 [INFO][4724] ipam/ipam.go 489: Trying affinity for 192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.250 [INFO][4724] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.252 [INFO][4724] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.253 [INFO][4724] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.255 [INFO][4724] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35 Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.262 [INFO][4724] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" 
host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.275 [INFO][4724] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.58.194/26] block=192.168.58.192/26 handle="k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.275 [INFO][4724] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.194/26] handle="k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.275 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:28.321950 containerd[1698]: 2025-01-30 14:11:28.275 [INFO][4724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.194/26] IPv6=[] ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" HandleID="k8s-pod-network.4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:28.322615 containerd[1698]: 2025-01-30 14:11:28.280 [INFO][4679] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnkgs" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"93092ddf-3818-41db-a977-b3fd99b148bc", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"", Pod:"coredns-7db6d8ff4d-nnkgs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbfceb87516", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:28.322615 containerd[1698]: 2025-01-30 14:11:28.280 [INFO][4679] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.194/32] ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnkgs" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:28.322615 containerd[1698]: 2025-01-30 14:11:28.280 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbfceb87516 ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnkgs" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 
30 14:11:28.322615 containerd[1698]: 2025-01-30 14:11:28.285 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnkgs" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:28.322615 containerd[1698]: 2025-01-30 14:11:28.286 [INFO][4679] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnkgs" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"93092ddf-3818-41db-a977-b3fd99b148bc", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35", Pod:"coredns-7db6d8ff4d-nnkgs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calicbfceb87516", MAC:"ee:65:ca:79:d2:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:28.322615 containerd[1698]: 2025-01-30 14:11:28.312 [INFO][4679] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nnkgs" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:28.351424 systemd[1]: Started cri-containerd-4b7d314d6859154823707a05f43d732be7af46095cccb8dffe2e226cdf4df0b7.scope - libcontainer container 4b7d314d6859154823707a05f43d732be7af46095cccb8dffe2e226cdf4df0b7. Jan 30 14:11:28.366353 containerd[1698]: time="2025-01-30T14:11:28.366019645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:28.366353 containerd[1698]: time="2025-01-30T14:11:28.366089085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:28.366353 containerd[1698]: time="2025-01-30T14:11:28.366133005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:28.366353 containerd[1698]: time="2025-01-30T14:11:28.366239765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:28.408638 containerd[1698]: time="2025-01-30T14:11:28.407182838Z" level=info msg="StartContainer for \"4b7d314d6859154823707a05f43d732be7af46095cccb8dffe2e226cdf4df0b7\" returns successfully" Jan 30 14:11:28.407379 systemd[1]: Started cri-containerd-4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35.scope - libcontainer container 4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35. Jan 30 14:11:28.455389 containerd[1698]: time="2025-01-30T14:11:28.455305803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnkgs,Uid:93092ddf-3818-41db-a977-b3fd99b148bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35\"" Jan 30 14:11:28.473805 containerd[1698]: time="2025-01-30T14:11:28.473749196Z" level=info msg="CreateContainer within sandbox \"4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:11:28.519626 containerd[1698]: time="2025-01-30T14:11:28.519572078Z" level=info msg="CreateContainer within sandbox \"4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ee090065c5525afd048cc4d608bd853006d77c1a1ea4b35a14512516350139e\"" Jan 30 14:11:28.521281 containerd[1698]: time="2025-01-30T14:11:28.520318399Z" level=info msg="StartContainer for \"4ee090065c5525afd048cc4d608bd853006d77c1a1ea4b35a14512516350139e\"" Jan 30 14:11:28.548455 systemd[1]: Started cri-containerd-4ee090065c5525afd048cc4d608bd853006d77c1a1ea4b35a14512516350139e.scope - libcontainer container 4ee090065c5525afd048cc4d608bd853006d77c1a1ea4b35a14512516350139e. 
Jan 30 14:11:28.582301 containerd[1698]: time="2025-01-30T14:11:28.582249069Z" level=info msg="StartContainer for \"4ee090065c5525afd048cc4d608bd853006d77c1a1ea4b35a14512516350139e\" returns successfully" Jan 30 14:11:28.870265 systemd-networkd[1449]: cali198851bc01f: Gained IPv6LL Jan 30 14:11:29.113119 containerd[1698]: time="2025-01-30T14:11:29.112529650Z" level=info msg="StopPodSandbox for \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\"" Jan 30 14:11:29.115442 containerd[1698]: time="2025-01-30T14:11:29.115117535Z" level=info msg="StopPodSandbox for \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\"" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.183 [INFO][4891] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.183 [INFO][4891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" iface="eth0" netns="/var/run/netns/cni-d3a0cdbf-6f6e-d2ef-c6c5-bc256eeb31e6" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.183 [INFO][4891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" iface="eth0" netns="/var/run/netns/cni-d3a0cdbf-6f6e-d2ef-c6c5-bc256eeb31e6" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.184 [INFO][4891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" iface="eth0" netns="/var/run/netns/cni-d3a0cdbf-6f6e-d2ef-c6c5-bc256eeb31e6" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.184 [INFO][4891] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.184 [INFO][4891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.214 [INFO][4904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.214 [INFO][4904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.214 [INFO][4904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.228 [WARNING][4904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.228 [INFO][4904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.231 [INFO][4904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:29.236008 containerd[1698]: 2025-01-30 14:11:29.234 [INFO][4891] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:29.237547 containerd[1698]: time="2025-01-30T14:11:29.237079511Z" level=info msg="TearDown network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\" successfully" Jan 30 14:11:29.237547 containerd[1698]: time="2025-01-30T14:11:29.237172511Z" level=info msg="StopPodSandbox for \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\" returns successfully" Jan 30 14:11:29.239520 containerd[1698]: time="2025-01-30T14:11:29.239470155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fbf6f7f87-2b92f,Uid:8260605b-df3c-4f82-90f7-e582f91db352,Namespace:calico-system,Attempt:1,}" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.205 [INFO][4890] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.205 [INFO][4890] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" iface="eth0" netns="/var/run/netns/cni-02cca890-b07d-062b-d140-aa976ef35aec" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.207 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" iface="eth0" netns="/var/run/netns/cni-02cca890-b07d-062b-d140-aa976ef35aec" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.207 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" iface="eth0" netns="/var/run/netns/cni-02cca890-b07d-062b-d140-aa976ef35aec" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.207 [INFO][4890] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.207 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.236 [INFO][4909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.236 [INFO][4909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.236 [INFO][4909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.248 [WARNING][4909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.248 [INFO][4909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.250 [INFO][4909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:29.254009 containerd[1698]: 2025-01-30 14:11:29.252 [INFO][4890] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:29.254470 containerd[1698]: time="2025-01-30T14:11:29.254227422Z" level=info msg="TearDown network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\" successfully" Jan 30 14:11:29.254470 containerd[1698]: time="2025-01-30T14:11:29.254257862Z" level=info msg="StopPodSandbox for \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\" returns successfully" Jan 30 14:11:29.255178 containerd[1698]: time="2025-01-30T14:11:29.255138943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdxt7,Uid:91e5ffe9-8aa4-4615-8d9e-b9e3697da13a,Namespace:calico-system,Attempt:1,}" Jan 30 14:11:29.309738 systemd[1]: run-netns-cni\x2d02cca890\x2db07d\x2d062b\x2dd140\x2daa976ef35aec.mount: Deactivated successfully. 
Jan 30 14:11:29.309852 systemd[1]: run-netns-cni\x2dd3a0cdbf\x2d6f6e\x2dd2ef\x2dc6c5\x2dbc256eeb31e6.mount: Deactivated successfully. Jan 30 14:11:29.447465 kubelet[3254]: I0130 14:11:29.447374 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zsqbw" podStartSLOduration=33.447349022 podStartE2EDuration="33.447349022s" podCreationTimestamp="2025-01-30 14:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:11:29.405246554 +0000 UTC m=+46.426553579" watchObservedRunningTime="2025-01-30 14:11:29.447349022 +0000 UTC m=+46.468656047" Jan 30 14:11:29.482472 kubelet[3254]: I0130 14:11:29.482391 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nnkgs" podStartSLOduration=33.482367559 podStartE2EDuration="33.482367559s" podCreationTimestamp="2025-01-30 14:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:11:29.450886548 +0000 UTC m=+46.472193573" watchObservedRunningTime="2025-01-30 14:11:29.482367559 +0000 UTC m=+46.503674544" Jan 30 14:11:29.521172 systemd-networkd[1449]: cali54a4769db32: Link UP Jan 30 14:11:29.523347 systemd-networkd[1449]: cali54a4769db32: Gained carrier Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.338 [INFO][4917] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0 calico-kube-controllers-5fbf6f7f87- calico-system 8260605b-df3c-4f82-90f7-e582f91db352 810 0 2025-01-30 14:11:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fbf6f7f87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-554d7cc729 calico-kube-controllers-5fbf6f7f87-2b92f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali54a4769db32 [] []}} ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Namespace="calico-system" Pod="calico-kube-controllers-5fbf6f7f87-2b92f" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.338 [INFO][4917] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Namespace="calico-system" Pod="calico-kube-controllers-5fbf6f7f87-2b92f" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.392 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" HandleID="k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.416 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" HandleID="k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d0940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-554d7cc729", "pod":"calico-kube-controllers-5fbf6f7f87-2b92f", "timestamp":"2025-01-30 14:11:29.392024933 +0000 UTC"}, 
Hostname:"ci-4081.3.0-a-554d7cc729", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.416 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.416 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.417 [INFO][4940] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-554d7cc729' Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.420 [INFO][4940] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.443 [INFO][4940] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.463 [INFO][4940] ipam/ipam.go 489: Trying affinity for 192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.475 [INFO][4940] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.484 [INFO][4940] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.484 [INFO][4940] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.491 
[INFO][4940] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3 Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.502 [INFO][4940] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.510 [INFO][4940] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.58.195/26] block=192.168.58.192/26 handle="k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.510 [INFO][4940] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.195/26] handle="k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.510 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:11:29.560998 containerd[1698]: 2025-01-30 14:11:29.510 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.195/26] IPv6=[] ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" HandleID="k8s-pod-network.05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.563411 containerd[1698]: 2025-01-30 14:11:29.513 [INFO][4917] cni-plugin/k8s.go 386: Populated endpoint ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Namespace="calico-system" Pod="calico-kube-controllers-5fbf6f7f87-2b92f" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0", GenerateName:"calico-kube-controllers-5fbf6f7f87-", Namespace:"calico-system", SelfLink:"", UID:"8260605b-df3c-4f82-90f7-e582f91db352", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fbf6f7f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"", Pod:"calico-kube-controllers-5fbf6f7f87-2b92f", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali54a4769db32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:29.563411 containerd[1698]: 2025-01-30 14:11:29.513 [INFO][4917] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.195/32] ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Namespace="calico-system" Pod="calico-kube-controllers-5fbf6f7f87-2b92f" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.563411 containerd[1698]: 2025-01-30 14:11:29.513 [INFO][4917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54a4769db32 ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Namespace="calico-system" Pod="calico-kube-controllers-5fbf6f7f87-2b92f" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.563411 containerd[1698]: 2025-01-30 14:11:29.522 [INFO][4917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Namespace="calico-system" Pod="calico-kube-controllers-5fbf6f7f87-2b92f" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:29.563411 containerd[1698]: 2025-01-30 14:11:29.524 [INFO][4917] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Namespace="calico-system" Pod="calico-kube-controllers-5fbf6f7f87-2b92f" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0", GenerateName:"calico-kube-controllers-5fbf6f7f87-", Namespace:"calico-system", SelfLink:"", UID:"8260605b-df3c-4f82-90f7-e582f91db352", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fbf6f7f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3", Pod:"calico-kube-controllers-5fbf6f7f87-2b92f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali54a4769db32", MAC:"8a:8a:b7:05:2e:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:29.563411 containerd[1698]: 2025-01-30 14:11:29.554 [INFO][4917] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3" Namespace="calico-system" Pod="calico-kube-controllers-5fbf6f7f87-2b92f" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 
14:11:29.596831 systemd-networkd[1449]: cali2a29232c7da: Link UP Jan 30 14:11:29.598227 systemd-networkd[1449]: cali2a29232c7da: Gained carrier Jan 30 14:11:29.612722 containerd[1698]: time="2025-01-30T14:11:29.612592209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:29.612889 containerd[1698]: time="2025-01-30T14:11:29.612821050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:29.613483 containerd[1698]: time="2025-01-30T14:11:29.613054570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:29.614434 containerd[1698]: time="2025-01-30T14:11:29.614076292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.361 [INFO][4928] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0 csi-node-driver- calico-system 91e5ffe9-8aa4-4615-8d9e-b9e3697da13a 811 0 2025-01-30 14:11:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-554d7cc729 csi-node-driver-sdxt7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2a29232c7da [] []}} ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Namespace="calico-system" Pod="csi-node-driver-sdxt7" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-" Jan 30 
14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.361 [INFO][4928] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Namespace="calico-system" Pod="csi-node-driver-sdxt7" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.435 [INFO][4947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" HandleID="k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.477 [INFO][4947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" HandleID="k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028eb00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-554d7cc729", "pod":"csi-node-driver-sdxt7", "timestamp":"2025-01-30 14:11:29.435705924 +0000 UTC"}, Hostname:"ci-4081.3.0-a-554d7cc729", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.477 [INFO][4947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.510 [INFO][4947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.510 [INFO][4947] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-554d7cc729' Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.515 [INFO][4947] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.527 [INFO][4947] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.540 [INFO][4947] ipam/ipam.go 489: Trying affinity for 192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.548 [INFO][4947] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.556 [INFO][4947] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.556 [INFO][4947] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.564 [INFO][4947] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4 Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.573 [INFO][4947] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.584 [INFO][4947] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.58.196/26] block=192.168.58.192/26 handle="k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.585 [INFO][4947] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.196/26] handle="k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.585 [INFO][4947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:29.625874 containerd[1698]: 2025-01-30 14:11:29.585 [INFO][4947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.196/26] IPv6=[] ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" HandleID="k8s-pod-network.b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.626971 containerd[1698]: 2025-01-30 14:11:29.589 [INFO][4928] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Namespace="calico-system" Pod="csi-node-driver-sdxt7" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"", Pod:"csi-node-driver-sdxt7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a29232c7da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:29.626971 containerd[1698]: 2025-01-30 14:11:29.589 [INFO][4928] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.196/32] ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Namespace="calico-system" Pod="csi-node-driver-sdxt7" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.626971 containerd[1698]: 2025-01-30 14:11:29.589 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a29232c7da ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Namespace="calico-system" Pod="csi-node-driver-sdxt7" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.626971 containerd[1698]: 2025-01-30 14:11:29.598 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Namespace="calico-system" Pod="csi-node-driver-sdxt7" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.626971 containerd[1698]: 2025-01-30 14:11:29.599 
[INFO][4928] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Namespace="calico-system" Pod="csi-node-driver-sdxt7" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4", Pod:"csi-node-driver-sdxt7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a29232c7da", MAC:"32:39:3d:b8:5e:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:29.626971 containerd[1698]: 2025-01-30 14:11:29.616 [INFO][4928] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4" Namespace="calico-system" Pod="csi-node-driver-sdxt7" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:29.650324 systemd[1]: Started cri-containerd-05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3.scope - libcontainer container 05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3. Jan 30 14:11:29.684516 containerd[1698]: time="2025-01-30T14:11:29.684387605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:29.684516 containerd[1698]: time="2025-01-30T14:11:29.684454285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:29.684516 containerd[1698]: time="2025-01-30T14:11:29.684493005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:29.684821 containerd[1698]: time="2025-01-30T14:11:29.684589525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:29.705002 containerd[1698]: time="2025-01-30T14:11:29.704949238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fbf6f7f87-2b92f,Uid:8260605b-df3c-4f82-90f7-e582f91db352,Namespace:calico-system,Attempt:1,} returns sandbox id \"05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3\"" Jan 30 14:11:29.707571 containerd[1698]: time="2025-01-30T14:11:29.707440482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:11:29.732360 systemd[1]: Started cri-containerd-b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4.scope - libcontainer container b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4. Jan 30 14:11:29.757517 containerd[1698]: time="2025-01-30T14:11:29.757463843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdxt7,Uid:91e5ffe9-8aa4-4615-8d9e-b9e3697da13a,Namespace:calico-system,Attempt:1,} returns sandbox id \"b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4\"" Jan 30 14:11:29.766255 systemd-networkd[1449]: calicbfceb87516: Gained IPv6LL Jan 30 14:11:30.111600 containerd[1698]: time="2025-01-30T14:11:30.111533495Z" level=info msg="StopPodSandbox for \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\"" Jan 30 14:11:30.112035 containerd[1698]: time="2025-01-30T14:11:30.111764895Z" level=info msg="StopPodSandbox for \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\"" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.182 [INFO][5092] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.183 [INFO][5092] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" iface="eth0" netns="/var/run/netns/cni-c6f4dafd-f139-a08a-167c-342eab8a86e4" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.183 [INFO][5092] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" iface="eth0" netns="/var/run/netns/cni-c6f4dafd-f139-a08a-167c-342eab8a86e4" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.183 [INFO][5092] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" iface="eth0" netns="/var/run/netns/cni-c6f4dafd-f139-a08a-167c-342eab8a86e4" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.183 [INFO][5092] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.183 [INFO][5092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.215 [INFO][5106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.215 [INFO][5106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.215 [INFO][5106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.225 [WARNING][5106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.225 [INFO][5106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.227 [INFO][5106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:30.234465 containerd[1698]: 2025-01-30 14:11:30.231 [INFO][5092] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:30.235119 containerd[1698]: time="2025-01-30T14:11:30.234738814Z" level=info msg="TearDown network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\" successfully" Jan 30 14:11:30.235119 containerd[1698]: time="2025-01-30T14:11:30.234784534Z" level=info msg="StopPodSandbox for \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\" returns successfully" Jan 30 14:11:30.236908 containerd[1698]: time="2025-01-30T14:11:30.236516336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c6bfd648-vqkjj,Uid:f17557fc-31f6-483c-8d6e-4f757373509d,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.191 [INFO][5093] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.191 [INFO][5093] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" iface="eth0" netns="/var/run/netns/cni-375c22c8-f132-0433-c13c-715976bc9fd0" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.192 [INFO][5093] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" iface="eth0" netns="/var/run/netns/cni-375c22c8-f132-0433-c13c-715976bc9fd0" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.193 [INFO][5093] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" iface="eth0" netns="/var/run/netns/cni-375c22c8-f132-0433-c13c-715976bc9fd0" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.193 [INFO][5093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.193 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.228 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.228 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.228 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.242 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.242 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.244 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:30.248553 containerd[1698]: 2025-01-30 14:11:30.246 [INFO][5093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:30.249297 containerd[1698]: time="2025-01-30T14:11:30.248812276Z" level=info msg="TearDown network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\" successfully" Jan 30 14:11:30.249297 containerd[1698]: time="2025-01-30T14:11:30.248844076Z" level=info msg="StopPodSandbox for \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\" returns successfully" Jan 30 14:11:30.249981 containerd[1698]: time="2025-01-30T14:11:30.249585717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c6bfd648-mrbcl,Uid:c8bca355-2aac-4395-868b-47cc9d1748b5,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:11:30.311417 systemd[1]: run-netns-cni\x2dc6f4dafd\x2df139\x2da08a\x2d167c\x2d342eab8a86e4.mount: Deactivated successfully. Jan 30 14:11:30.311539 systemd[1]: run-netns-cni\x2d375c22c8\x2df132\x2d0433\x2dc13c\x2d715976bc9fd0.mount: Deactivated successfully. 
Jan 30 14:11:30.478090 systemd-networkd[1449]: cali3c7b0eb8863: Link UP Jan 30 14:11:30.478699 systemd-networkd[1449]: cali3c7b0eb8863: Gained carrier Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.331 [INFO][5118] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0 calico-apiserver-64c6bfd648- calico-apiserver f17557fc-31f6-483c-8d6e-4f757373509d 837 0 2025-01-30 14:11:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64c6bfd648 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-554d7cc729 calico-apiserver-64c6bfd648-vqkjj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c7b0eb8863 [] []}} ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-vqkjj" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.331 [INFO][5118] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-vqkjj" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.391 [INFO][5139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" HandleID="k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 
14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.411 [INFO][5139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" HandleID="k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fa1b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-554d7cc729", "pod":"calico-apiserver-64c6bfd648-vqkjj", "timestamp":"2025-01-30 14:11:30.391229986 +0000 UTC"}, Hostname:"ci-4081.3.0-a-554d7cc729", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.411 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.411 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.411 [INFO][5139] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-554d7cc729' Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.415 [INFO][5139] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.426 [INFO][5139] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.436 [INFO][5139] ipam/ipam.go 489: Trying affinity for 192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.439 [INFO][5139] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.443 [INFO][5139] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.443 [INFO][5139] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.447 [INFO][5139] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.454 [INFO][5139] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.466 [INFO][5139] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.58.197/26] block=192.168.58.192/26 handle="k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.466 [INFO][5139] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.197/26] handle="k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.466 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:30.511831 containerd[1698]: 2025-01-30 14:11:30.466 [INFO][5139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.197/26] IPv6=[] ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" HandleID="k8s-pod-network.e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.513954 containerd[1698]: 2025-01-30 14:11:30.469 [INFO][5118] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-vqkjj" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0", GenerateName:"calico-apiserver-64c6bfd648-", Namespace:"calico-apiserver", SelfLink:"", UID:"f17557fc-31f6-483c-8d6e-4f757373509d", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c6bfd648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"", Pod:"calico-apiserver-64c6bfd648-vqkjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c7b0eb8863", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:30.513954 containerd[1698]: 2025-01-30 14:11:30.469 [INFO][5118] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.197/32] ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-vqkjj" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.513954 containerd[1698]: 2025-01-30 14:11:30.470 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c7b0eb8863 ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-vqkjj" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.513954 containerd[1698]: 2025-01-30 14:11:30.477 [INFO][5118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-vqkjj" 
WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.513954 containerd[1698]: 2025-01-30 14:11:30.477 [INFO][5118] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-vqkjj" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0", GenerateName:"calico-apiserver-64c6bfd648-", Namespace:"calico-apiserver", SelfLink:"", UID:"f17557fc-31f6-483c-8d6e-4f757373509d", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c6bfd648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae", Pod:"calico-apiserver-64c6bfd648-vqkjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c7b0eb8863", MAC:"da:77:2d:31:53:e4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:30.513954 containerd[1698]: 2025-01-30 14:11:30.508 [INFO][5118] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-vqkjj" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:30.558425 containerd[1698]: time="2025-01-30T14:11:30.557656135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:30.558425 containerd[1698]: time="2025-01-30T14:11:30.557730055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:30.558425 containerd[1698]: time="2025-01-30T14:11:30.557746335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:30.558425 containerd[1698]: time="2025-01-30T14:11:30.557833055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:30.580071 systemd-networkd[1449]: cali4982d3cb99d: Link UP Jan 30 14:11:30.580967 systemd-networkd[1449]: cali4982d3cb99d: Gained carrier Jan 30 14:11:30.600412 systemd[1]: Started cri-containerd-e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae.scope - libcontainer container e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae. 
Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.377 [INFO][5130] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0 calico-apiserver-64c6bfd648- calico-apiserver c8bca355-2aac-4395-868b-47cc9d1748b5 838 0 2025-01-30 14:11:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64c6bfd648 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-554d7cc729 calico-apiserver-64c6bfd648-mrbcl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4982d3cb99d [] []}} ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-mrbcl" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.378 [INFO][5130] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-mrbcl" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.444 [INFO][5148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" HandleID="k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.465 [INFO][5148] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" HandleID="k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000282530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-554d7cc729", "pod":"calico-apiserver-64c6bfd648-mrbcl", "timestamp":"2025-01-30 14:11:30.444709432 +0000 UTC"}, Hostname:"ci-4081.3.0-a-554d7cc729", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.465 [INFO][5148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.466 [INFO][5148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.466 [INFO][5148] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-554d7cc729' Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.471 [INFO][5148] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.489 [INFO][5148] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.508 [INFO][5148] ipam/ipam.go 489: Trying affinity for 192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.515 [INFO][5148] ipam/ipam.go 155: Attempting to load block cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.520 [INFO][5148] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.521 [INFO][5148] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.525 [INFO][5148] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.536 [INFO][5148] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.554 [INFO][5148] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.58.198/26] block=192.168.58.192/26 handle="k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.554 [INFO][5148] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.58.198/26] handle="k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" host="ci-4081.3.0-a-554d7cc729" Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.554 [INFO][5148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:30.615879 containerd[1698]: 2025-01-30 14:11:30.554 [INFO][5148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.58.198/26] IPv6=[] ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" HandleID="k8s-pod-network.163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.616512 containerd[1698]: 2025-01-30 14:11:30.560 [INFO][5130] cni-plugin/k8s.go 386: Populated endpoint ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-mrbcl" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0", GenerateName:"calico-apiserver-64c6bfd648-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8bca355-2aac-4395-868b-47cc9d1748b5", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c6bfd648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"", Pod:"calico-apiserver-64c6bfd648-mrbcl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4982d3cb99d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:30.616512 containerd[1698]: 2025-01-30 14:11:30.564 [INFO][5130] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.58.198/32] ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-mrbcl" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.616512 containerd[1698]: 2025-01-30 14:11:30.564 [INFO][5130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4982d3cb99d ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-mrbcl" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.616512 containerd[1698]: 2025-01-30 14:11:30.581 [INFO][5130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-mrbcl" 
WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.616512 containerd[1698]: 2025-01-30 14:11:30.583 [INFO][5130] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-mrbcl" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0", GenerateName:"calico-apiserver-64c6bfd648-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8bca355-2aac-4395-868b-47cc9d1748b5", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c6bfd648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c", Pod:"calico-apiserver-64c6bfd648-mrbcl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4982d3cb99d", MAC:"6e:6b:e7:9f:de:b0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:30.616512 containerd[1698]: 2025-01-30 14:11:30.613 [INFO][5130] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c" Namespace="calico-apiserver" Pod="calico-apiserver-64c6bfd648-mrbcl" WorkloadEndpoint="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:30.651343 containerd[1698]: time="2025-01-30T14:11:30.650723165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:11:30.651343 containerd[1698]: time="2025-01-30T14:11:30.650808925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:11:30.651343 containerd[1698]: time="2025-01-30T14:11:30.650824165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:30.651343 containerd[1698]: time="2025-01-30T14:11:30.650921725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:11:30.683525 systemd[1]: Started cri-containerd-163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c.scope - libcontainer container 163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c. 
Jan 30 14:11:30.701455 containerd[1698]: time="2025-01-30T14:11:30.701072086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c6bfd648-vqkjj,Uid:f17557fc-31f6-483c-8d6e-4f757373509d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae\"" Jan 30 14:11:30.726357 systemd-networkd[1449]: cali54a4769db32: Gained IPv6LL Jan 30 14:11:30.738269 containerd[1698]: time="2025-01-30T14:11:30.736653784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c6bfd648-mrbcl,Uid:c8bca355-2aac-4395-868b-47cc9d1748b5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c\"" Jan 30 14:11:31.110306 systemd-networkd[1449]: cali2a29232c7da: Gained IPv6LL Jan 30 14:11:31.942334 systemd-networkd[1449]: cali4982d3cb99d: Gained IPv6LL Jan 30 14:11:32.071151 systemd-networkd[1449]: cali3c7b0eb8863: Gained IPv6LL Jan 30 14:11:33.375714 containerd[1698]: time="2025-01-30T14:11:33.375233043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:33.378164 containerd[1698]: time="2025-01-30T14:11:33.378080528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 14:11:33.381575 containerd[1698]: time="2025-01-30T14:11:33.381515014Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:33.385862 containerd[1698]: time="2025-01-30T14:11:33.385726500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:33.387302 
containerd[1698]: time="2025-01-30T14:11:33.386614662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.67912334s" Jan 30 14:11:33.387302 containerd[1698]: time="2025-01-30T14:11:33.386669582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 14:11:33.390242 containerd[1698]: time="2025-01-30T14:11:33.389415826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:11:33.410240 containerd[1698]: time="2025-01-30T14:11:33.410109060Z" level=info msg="CreateContainer within sandbox \"05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:11:33.451592 containerd[1698]: time="2025-01-30T14:11:33.451484206Z" level=info msg="CreateContainer within sandbox \"05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95\"" Jan 30 14:11:33.454414 containerd[1698]: time="2025-01-30T14:11:33.454284811Z" level=info msg="StartContainer for \"c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95\"" Jan 30 14:11:33.491358 systemd[1]: Started cri-containerd-c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95.scope - libcontainer container c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95. 
Jan 30 14:11:33.534179 containerd[1698]: time="2025-01-30T14:11:33.534081860Z" level=info msg="StartContainer for \"c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95\" returns successfully" Jan 30 14:11:34.473663 systemd[1]: run-containerd-runc-k8s.io-c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95-runc.nTUUQd.mount: Deactivated successfully. Jan 30 14:11:34.557731 kubelet[3254]: I0130 14:11:34.557584 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fbf6f7f87-2b92f" podStartSLOduration=26.87619397 podStartE2EDuration="30.557448752s" podCreationTimestamp="2025-01-30 14:11:04 +0000 UTC" firstStartedPulling="2025-01-30 14:11:29.707046642 +0000 UTC m=+46.728353667" lastFinishedPulling="2025-01-30 14:11:33.388301424 +0000 UTC m=+50.409608449" observedRunningTime="2025-01-30 14:11:34.440279803 +0000 UTC m=+51.461586828" watchObservedRunningTime="2025-01-30 14:11:34.557448752 +0000 UTC m=+51.578755777" Jan 30 14:11:34.762569 containerd[1698]: time="2025-01-30T14:11:34.762397243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:34.766694 containerd[1698]: time="2025-01-30T14:11:34.766565969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 14:11:34.773550 containerd[1698]: time="2025-01-30T14:11:34.773401220Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:34.778208 containerd[1698]: time="2025-01-30T14:11:34.778064948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:34.779059 containerd[1698]: 
time="2025-01-30T14:11:34.778868509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.389395443s" Jan 30 14:11:34.779059 containerd[1698]: time="2025-01-30T14:11:34.778922829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 14:11:34.780605 containerd[1698]: time="2025-01-30T14:11:34.780461752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:11:34.782707 containerd[1698]: time="2025-01-30T14:11:34.782369755Z" level=info msg="CreateContainer within sandbox \"b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:11:34.829994 containerd[1698]: time="2025-01-30T14:11:34.829932672Z" level=info msg="CreateContainer within sandbox \"b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"38def9f3656e4aa34a8147e803f35c1388705645e4c4bb961ce3b5afb4e9b8ed\"" Jan 30 14:11:34.832589 containerd[1698]: time="2025-01-30T14:11:34.831969955Z" level=info msg="StartContainer for \"38def9f3656e4aa34a8147e803f35c1388705645e4c4bb961ce3b5afb4e9b8ed\"" Jan 30 14:11:34.871384 systemd[1]: Started cri-containerd-38def9f3656e4aa34a8147e803f35c1388705645e4c4bb961ce3b5afb4e9b8ed.scope - libcontainer container 38def9f3656e4aa34a8147e803f35c1388705645e4c4bb961ce3b5afb4e9b8ed. 
Jan 30 14:11:34.907156 containerd[1698]: time="2025-01-30T14:11:34.906778196Z" level=info msg="StartContainer for \"38def9f3656e4aa34a8147e803f35c1388705645e4c4bb961ce3b5afb4e9b8ed\" returns successfully" Jan 30 14:11:38.006153 containerd[1698]: time="2025-01-30T14:11:38.006028848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:38.008984 containerd[1698]: time="2025-01-30T14:11:38.008679164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 14:11:38.012741 containerd[1698]: time="2025-01-30T14:11:38.012685437Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:38.018064 containerd[1698]: time="2025-01-30T14:11:38.018009069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:38.019628 containerd[1698]: time="2025-01-30T14:11:38.018793668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.237054314s" Jan 30 14:11:38.019628 containerd[1698]: time="2025-01-30T14:11:38.018848708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:11:38.021055 containerd[1698]: time="2025-01-30T14:11:38.020687185Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:11:38.024148 containerd[1698]: time="2025-01-30T14:11:38.023894860Z" level=info msg="CreateContainer within sandbox \"e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:11:38.066040 containerd[1698]: time="2025-01-30T14:11:38.065976473Z" level=info msg="CreateContainer within sandbox \"e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"40f9988ae07a497fccbd7bce7dd6057c01a85ccdf30dd0fbcf2e55c06b8ef701\"" Jan 30 14:11:38.068137 containerd[1698]: time="2025-01-30T14:11:38.066758592Z" level=info msg="StartContainer for \"40f9988ae07a497fccbd7bce7dd6057c01a85ccdf30dd0fbcf2e55c06b8ef701\"" Jan 30 14:11:38.112351 systemd[1]: Started cri-containerd-40f9988ae07a497fccbd7bce7dd6057c01a85ccdf30dd0fbcf2e55c06b8ef701.scope - libcontainer container 40f9988ae07a497fccbd7bce7dd6057c01a85ccdf30dd0fbcf2e55c06b8ef701. 
Jan 30 14:11:38.165156 containerd[1698]: time="2025-01-30T14:11:38.165000075Z" level=info msg="StartContainer for \"40f9988ae07a497fccbd7bce7dd6057c01a85ccdf30dd0fbcf2e55c06b8ef701\" returns successfully" Jan 30 14:11:38.340940 containerd[1698]: time="2025-01-30T14:11:38.339791438Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:38.344395 containerd[1698]: time="2025-01-30T14:11:38.344339151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:11:38.346766 containerd[1698]: time="2025-01-30T14:11:38.346700147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 325.958162ms" Jan 30 14:11:38.346848 containerd[1698]: time="2025-01-30T14:11:38.346771587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:11:38.348878 containerd[1698]: time="2025-01-30T14:11:38.348370944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:11:38.351558 containerd[1698]: time="2025-01-30T14:11:38.351512979Z" level=info msg="CreateContainer within sandbox \"163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:11:38.392937 containerd[1698]: time="2025-01-30T14:11:38.392880514Z" level=info msg="CreateContainer within sandbox \"163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83e1b17b2f22854408c3a8dc5d0c130308aa4e788bef98d951ef34e834578aff\"" Jan 30 14:11:38.394818 containerd[1698]: time="2025-01-30T14:11:38.394769311Z" level=info msg="StartContainer for \"83e1b17b2f22854408c3a8dc5d0c130308aa4e788bef98d951ef34e834578aff\"" Jan 30 14:11:38.436394 systemd[1]: Started cri-containerd-83e1b17b2f22854408c3a8dc5d0c130308aa4e788bef98d951ef34e834578aff.scope - libcontainer container 83e1b17b2f22854408c3a8dc5d0c130308aa4e788bef98d951ef34e834578aff. Jan 30 14:11:38.520858 containerd[1698]: time="2025-01-30T14:11:38.520662111Z" level=info msg="StartContainer for \"83e1b17b2f22854408c3a8dc5d0c130308aa4e788bef98d951ef34e834578aff\" returns successfully" Jan 30 14:11:39.485881 kubelet[3254]: I0130 14:11:39.484272 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64c6bfd648-vqkjj" podStartSLOduration=29.169044492 podStartE2EDuration="36.484243226s" podCreationTimestamp="2025-01-30 14:11:03 +0000 UTC" firstStartedPulling="2025-01-30 14:11:30.704774492 +0000 UTC m=+47.726081517" lastFinishedPulling="2025-01-30 14:11:38.019973226 +0000 UTC m=+55.041280251" observedRunningTime="2025-01-30 14:11:38.468383234 +0000 UTC m=+55.489690259" watchObservedRunningTime="2025-01-30 14:11:39.484243226 +0000 UTC m=+56.505550251" Jan 30 14:11:39.788651 containerd[1698]: time="2025-01-30T14:11:39.788471384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:39.795615 containerd[1698]: time="2025-01-30T14:11:39.794834355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 14:11:39.799954 containerd[1698]: time="2025-01-30T14:11:39.799866845Z" level=info msg="ImageCreate event 
name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:39.809239 containerd[1698]: time="2025-01-30T14:11:39.809174622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:11:39.811117 containerd[1698]: time="2025-01-30T14:11:39.811033625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.462510121s" Jan 30 14:11:39.811437 containerd[1698]: time="2025-01-30T14:11:39.811381106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 14:11:39.816657 containerd[1698]: time="2025-01-30T14:11:39.816565635Z" level=info msg="CreateContainer within sandbox \"b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:11:39.854985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671871823.mount: Deactivated successfully. 
Jan 30 14:11:39.864531 containerd[1698]: time="2025-01-30T14:11:39.864415083Z" level=info msg="CreateContainer within sandbox \"b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6d607f751be349d901996789133d33990723210d2e8279295888bb6466549be7\"" Jan 30 14:11:39.866628 containerd[1698]: time="2025-01-30T14:11:39.866399087Z" level=info msg="StartContainer for \"6d607f751be349d901996789133d33990723210d2e8279295888bb6466549be7\"" Jan 30 14:11:39.939740 systemd[1]: Started cri-containerd-6d607f751be349d901996789133d33990723210d2e8279295888bb6466549be7.scope - libcontainer container 6d607f751be349d901996789133d33990723210d2e8279295888bb6466549be7. Jan 30 14:11:40.011650 kubelet[3254]: I0130 14:11:40.011543 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64c6bfd648-mrbcl" podStartSLOduration=29.404834279 podStartE2EDuration="37.011514113s" podCreationTimestamp="2025-01-30 14:11:03 +0000 UTC" firstStartedPulling="2025-01-30 14:11:30.741333431 +0000 UTC m=+47.762640456" lastFinishedPulling="2025-01-30 14:11:38.348013265 +0000 UTC m=+55.369320290" observedRunningTime="2025-01-30 14:11:39.486787591 +0000 UTC m=+56.508094616" watchObservedRunningTime="2025-01-30 14:11:40.011514113 +0000 UTC m=+57.032821098" Jan 30 14:11:40.026745 containerd[1698]: time="2025-01-30T14:11:40.026671700Z" level=info msg="StartContainer for \"6d607f751be349d901996789133d33990723210d2e8279295888bb6466549be7\" returns successfully" Jan 30 14:11:40.254596 kubelet[3254]: I0130 14:11:40.254487 3254 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:11:40.257417 kubelet[3254]: I0130 14:11:40.257081 3254 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:11:43.136878 containerd[1698]: time="2025-01-30T14:11:43.136432801Z" level=info msg="StopPodSandbox for \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\"" Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.201 [WARNING][5534] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0", GenerateName:"calico-apiserver-64c6bfd648-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8bca355-2aac-4395-868b-47cc9d1748b5", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c6bfd648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c", Pod:"calico-apiserver-64c6bfd648-mrbcl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4982d3cb99d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.201 [INFO][5534] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.201 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" iface="eth0" netns="" Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.201 [INFO][5534] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.201 [INFO][5534] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.246 [INFO][5540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.246 [INFO][5540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.246 [INFO][5540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.256 [WARNING][5540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.256 [INFO][5540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.258 [INFO][5540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:43.265487 containerd[1698]: 2025-01-30 14:11:43.261 [INFO][5534] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:43.265487 containerd[1698]: time="2025-01-30T14:11:43.265369837Z" level=info msg="TearDown network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\" successfully" Jan 30 14:11:43.265487 containerd[1698]: time="2025-01-30T14:11:43.265447477Z" level=info msg="StopPodSandbox for \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\" returns successfully" Jan 30 14:11:43.267942 containerd[1698]: time="2025-01-30T14:11:43.267881682Z" level=info msg="RemovePodSandbox for \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\"" Jan 30 14:11:43.267942 containerd[1698]: time="2025-01-30T14:11:43.267929722Z" level=info msg="Forcibly stopping sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\"" Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.321 [WARNING][5559] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0", GenerateName:"calico-apiserver-64c6bfd648-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8bca355-2aac-4395-868b-47cc9d1748b5", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c6bfd648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"163e9457ed65134270beed8142565df0f8792d7e5c910bbe5895e4e3359de84c", Pod:"calico-apiserver-64c6bfd648-mrbcl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4982d3cb99d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.321 [INFO][5559] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.321 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" iface="eth0" netns="" Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.321 [INFO][5559] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.321 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.346 [INFO][5565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.347 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.347 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.356 [WARNING][5565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.356 [INFO][5565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" HandleID="k8s-pod-network.862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--mrbcl-eth0" Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.358 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:43.364904 containerd[1698]: 2025-01-30 14:11:43.361 [INFO][5559] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9" Jan 30 14:11:43.364904 containerd[1698]: time="2025-01-30T14:11:43.363749257Z" level=info msg="TearDown network for sandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\" successfully" Jan 30 14:11:43.377623 containerd[1698]: time="2025-01-30T14:11:43.377512243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:11:43.377795 containerd[1698]: time="2025-01-30T14:11:43.377664443Z" level=info msg="RemovePodSandbox \"862476c12aeee84d853caa2944c45ded5b950728836033b82dca04fe26f7a3c9\" returns successfully" Jan 30 14:11:43.378493 containerd[1698]: time="2025-01-30T14:11:43.378391724Z" level=info msg="StopPodSandbox for \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\"" Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.445 [WARNING][5583] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"93092ddf-3818-41db-a977-b3fd99b148bc", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35", Pod:"coredns-7db6d8ff4d-nnkgs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbfceb87516", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.445 [INFO][5583] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.445 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" iface="eth0" netns="" Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.445 [INFO][5583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.445 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.478 [INFO][5590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.479 [INFO][5590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.479 [INFO][5590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.492 [WARNING][5590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.492 [INFO][5590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.494 [INFO][5590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:43.498046 containerd[1698]: 2025-01-30 14:11:43.496 [INFO][5583] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:43.498046 containerd[1698]: time="2025-01-30T14:11:43.497996464Z" level=info msg="TearDown network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\" successfully" Jan 30 14:11:43.498046 containerd[1698]: time="2025-01-30T14:11:43.498027704Z" level=info msg="StopPodSandbox for \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\" returns successfully" Jan 30 14:11:43.498644 containerd[1698]: time="2025-01-30T14:11:43.498589185Z" level=info msg="RemovePodSandbox for \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\"" Jan 30 14:11:43.498644 containerd[1698]: time="2025-01-30T14:11:43.498621825Z" level=info msg="Forcibly stopping sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\"" Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.542 [WARNING][5608] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"93092ddf-3818-41db-a977-b3fd99b148bc", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"4b0ccbe6214276d52f4294409eee8d4dd2ba6fe5e924735590eb3984fb691d35", Pod:"coredns-7db6d8ff4d-nnkgs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbfceb87516", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.543 [INFO][5608] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.543 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" iface="eth0" netns="" Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.543 [INFO][5608] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.543 [INFO][5608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.569 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.569 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.569 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.582 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.582 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" HandleID="k8s-pod-network.1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--nnkgs-eth0" Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.584 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:43.589453 containerd[1698]: 2025-01-30 14:11:43.586 [INFO][5608] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867" Jan 30 14:11:43.589966 containerd[1698]: time="2025-01-30T14:11:43.589484231Z" level=info msg="TearDown network for sandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\" successfully" Jan 30 14:11:43.598983 containerd[1698]: time="2025-01-30T14:11:43.598854688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:11:43.599161 containerd[1698]: time="2025-01-30T14:11:43.599011249Z" level=info msg="RemovePodSandbox \"1f647b7e606bcf60fcf46807758156da8bec9d0673a9a08ddf37607b0d31e867\" returns successfully" Jan 30 14:11:43.599655 containerd[1698]: time="2025-01-30T14:11:43.599610410Z" level=info msg="StopPodSandbox for \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\"" Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.642 [WARNING][5632] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0", GenerateName:"calico-kube-controllers-5fbf6f7f87-", Namespace:"calico-system", SelfLink:"", UID:"8260605b-df3c-4f82-90f7-e582f91db352", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fbf6f7f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3", Pod:"calico-kube-controllers-5fbf6f7f87-2b92f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali54a4769db32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.642 [INFO][5632] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.642 [INFO][5632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" iface="eth0" netns="" Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.642 [INFO][5632] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.642 [INFO][5632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.674 [INFO][5638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.675 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.675 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.684 [WARNING][5638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.684 [INFO][5638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.686 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:43.689133 containerd[1698]: 2025-01-30 14:11:43.687 [INFO][5632] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:43.689859 containerd[1698]: time="2025-01-30T14:11:43.689240414Z" level=info msg="TearDown network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\" successfully" Jan 30 14:11:43.689859 containerd[1698]: time="2025-01-30T14:11:43.689273774Z" level=info msg="StopPodSandbox for \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\" returns successfully" Jan 30 14:11:43.690131 containerd[1698]: time="2025-01-30T14:11:43.690052536Z" level=info msg="RemovePodSandbox for \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\"" Jan 30 14:11:43.690185 containerd[1698]: time="2025-01-30T14:11:43.690118216Z" level=info msg="Forcibly stopping sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\"" Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.733 [WARNING][5656] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0", GenerateName:"calico-kube-controllers-5fbf6f7f87-", Namespace:"calico-system", SelfLink:"", UID:"8260605b-df3c-4f82-90f7-e582f91db352", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fbf6f7f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"05105d97021e3a9f44504a29236e2873a4550522b8c73ba54980143946a742f3", Pod:"calico-kube-controllers-5fbf6f7f87-2b92f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali54a4769db32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.733 [INFO][5656] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.733 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" iface="eth0" netns="" Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.733 [INFO][5656] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.733 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.755 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.755 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.755 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.764 [WARNING][5662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.764 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" HandleID="k8s-pod-network.14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--kube--controllers--5fbf6f7f87--2b92f-eth0" Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.766 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:43.769585 containerd[1698]: 2025-01-30 14:11:43.768 [INFO][5656] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90" Jan 30 14:11:43.769585 containerd[1698]: time="2025-01-30T14:11:43.769553761Z" level=info msg="TearDown network for sandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\" successfully" Jan 30 14:11:43.779805 containerd[1698]: time="2025-01-30T14:11:43.779742540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:11:43.779955 containerd[1698]: time="2025-01-30T14:11:43.779867260Z" level=info msg="RemovePodSandbox \"14b8936673c7704f7a14abbb5f13c6eb2dc05c0e201ad0f88b4ca8ce5f331e90\" returns successfully" Jan 30 14:11:43.780588 containerd[1698]: time="2025-01-30T14:11:43.780555541Z" level=info msg="StopPodSandbox for \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\"" Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.819 [WARNING][5680] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0", GenerateName:"calico-apiserver-64c6bfd648-", Namespace:"calico-apiserver", SelfLink:"", UID:"f17557fc-31f6-483c-8d6e-4f757373509d", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c6bfd648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae", Pod:"calico-apiserver-64c6bfd648-vqkjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c7b0eb8863", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.820 [INFO][5680] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.820 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" iface="eth0" netns="" Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.820 [INFO][5680] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.820 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.841 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.841 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.841 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.852 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.852 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.855 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:43.858918 containerd[1698]: 2025-01-30 14:11:43.857 [INFO][5680] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:43.858918 containerd[1698]: time="2025-01-30T14:11:43.858753805Z" level=info msg="TearDown network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\" successfully" Jan 30 14:11:43.858918 containerd[1698]: time="2025-01-30T14:11:43.858804045Z" level=info msg="StopPodSandbox for \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\" returns successfully" Jan 30 14:11:43.859597 containerd[1698]: time="2025-01-30T14:11:43.859456046Z" level=info msg="RemovePodSandbox for \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\"" Jan 30 14:11:43.859597 containerd[1698]: time="2025-01-30T14:11:43.859535206Z" level=info msg="Forcibly stopping sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\"" Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.910 [WARNING][5704] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0", GenerateName:"calico-apiserver-64c6bfd648-", Namespace:"calico-apiserver", SelfLink:"", UID:"f17557fc-31f6-483c-8d6e-4f757373509d", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c6bfd648", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"e97d9a1b35509dc2d8b9e003561b7eff6749df8843082bd3563df0e326892bae", Pod:"calico-apiserver-64c6bfd648-vqkjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c7b0eb8863", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.911 [INFO][5704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.911 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" iface="eth0" netns="" Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.911 [INFO][5704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.911 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.936 [INFO][5710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.937 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.937 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.945 [WARNING][5710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.946 [INFO][5710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" HandleID="k8s-pod-network.8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Workload="ci--4081.3.0--a--554d7cc729-k8s-calico--apiserver--64c6bfd648--vqkjj-eth0" Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.948 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:43.951148 containerd[1698]: 2025-01-30 14:11:43.949 [INFO][5704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818" Jan 30 14:11:43.951656 containerd[1698]: time="2025-01-30T14:11:43.951191174Z" level=info msg="TearDown network for sandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\" successfully" Jan 30 14:11:43.960176 containerd[1698]: time="2025-01-30T14:11:43.960062471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:11:43.960371 containerd[1698]: time="2025-01-30T14:11:43.960196271Z" level=info msg="RemovePodSandbox \"8ea7ce95a7297d05c283bfa02b5450f1ce16947ac028240d4e571735f79b3818\" returns successfully" Jan 30 14:11:43.960910 containerd[1698]: time="2025-01-30T14:11:43.960875872Z" level=info msg="StopPodSandbox for \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\"" Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.005 [WARNING][5728] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4cddc560-c51f-446e-970e-57663cf70483", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f", Pod:"coredns-7db6d8ff4d-zsqbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali198851bc01f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.005 [INFO][5728] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.005 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" iface="eth0" netns="" Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.005 [INFO][5728] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.005 [INFO][5728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.028 [INFO][5736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.028 [INFO][5736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.028 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.038 [WARNING][5736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.038 [INFO][5736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.040 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:44.043458 containerd[1698]: 2025-01-30 14:11:44.041 [INFO][5728] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:44.043458 containerd[1698]: time="2025-01-30T14:11:44.043358543Z" level=info msg="TearDown network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\" successfully" Jan 30 14:11:44.043458 containerd[1698]: time="2025-01-30T14:11:44.043387743Z" level=info msg="StopPodSandbox for \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\" returns successfully" Jan 30 14:11:44.045572 containerd[1698]: time="2025-01-30T14:11:44.043981784Z" level=info msg="RemovePodSandbox for \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\"" Jan 30 14:11:44.045572 containerd[1698]: time="2025-01-30T14:11:44.044020664Z" level=info msg="Forcibly stopping sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\"" Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.084 [WARNING][5755] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4cddc560-c51f-446e-970e-57663cf70483", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"5e44671bd14725f7debdd57c66dd19b71322a2929b467daa6c7d56005d207e3f", Pod:"coredns-7db6d8ff4d-zsqbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali198851bc01f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.085 [INFO][5755] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.085 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" iface="eth0" netns="" Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.085 [INFO][5755] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.085 [INFO][5755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.107 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.108 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.108 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.117 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.117 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" HandleID="k8s-pod-network.913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Workload="ci--4081.3.0--a--554d7cc729-k8s-coredns--7db6d8ff4d--zsqbw-eth0" Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.119 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:44.122505 containerd[1698]: 2025-01-30 14:11:44.121 [INFO][5755] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e" Jan 30 14:11:44.122505 containerd[1698]: time="2025-01-30T14:11:44.122561368Z" level=info msg="TearDown network for sandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\" successfully" Jan 30 14:11:44.142239 containerd[1698]: time="2025-01-30T14:11:44.142178244Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:11:44.142918 containerd[1698]: time="2025-01-30T14:11:44.142270685Z" level=info msg="RemovePodSandbox \"913f7e9e5442f4aac2709ada3dbee427aa23e9bfece54840106e67608d183a3e\" returns successfully" Jan 30 14:11:44.142918 containerd[1698]: time="2025-01-30T14:11:44.142823766Z" level=info msg="StopPodSandbox for \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\"" Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.189 [WARNING][5779] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4", Pod:"csi-node-driver-sdxt7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a29232c7da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.189 [INFO][5779] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.189 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" iface="eth0" netns="" Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.189 [INFO][5779] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.189 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.212 [INFO][5785] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.212 [INFO][5785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.212 [INFO][5785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.223 [WARNING][5785] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.223 [INFO][5785] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.225 [INFO][5785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:44.228927 containerd[1698]: 2025-01-30 14:11:44.227 [INFO][5779] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:44.230130 containerd[1698]: time="2025-01-30T14:11:44.228983243Z" level=info msg="TearDown network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\" successfully" Jan 30 14:11:44.230130 containerd[1698]: time="2025-01-30T14:11:44.229018564Z" level=info msg="StopPodSandbox for \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\" returns successfully" Jan 30 14:11:44.230130 containerd[1698]: time="2025-01-30T14:11:44.230024965Z" level=info msg="RemovePodSandbox for \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\"" Jan 30 14:11:44.230130 containerd[1698]: time="2025-01-30T14:11:44.230074125Z" level=info msg="Forcibly stopping sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\"" Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.277 [WARNING][5803] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91e5ffe9-8aa4-4615-8d9e-b9e3697da13a", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-554d7cc729", ContainerID:"b14976c15df19f511be95e98cb3119030e26bf27cc13e2b417bbb1c8a6cc2eb4", Pod:"csi-node-driver-sdxt7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a29232c7da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.277 [INFO][5803] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.277 [INFO][5803] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" iface="eth0" netns="" Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.277 [INFO][5803] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.277 [INFO][5803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.298 [INFO][5809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.298 [INFO][5809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.298 [INFO][5809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.310 [WARNING][5809] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.310 [INFO][5809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" HandleID="k8s-pod-network.7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Workload="ci--4081.3.0--a--554d7cc729-k8s-csi--node--driver--sdxt7-eth0" Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.312 [INFO][5809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:11:44.315766 containerd[1698]: 2025-01-30 14:11:44.314 [INFO][5803] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835" Jan 30 14:11:44.315766 containerd[1698]: time="2025-01-30T14:11:44.315721322Z" level=info msg="TearDown network for sandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\" successfully" Jan 30 14:11:44.323304 containerd[1698]: time="2025-01-30T14:11:44.323233376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:11:44.323493 containerd[1698]: time="2025-01-30T14:11:44.323345816Z" level=info msg="RemovePodSandbox \"7070f091c8f9a4fbb03a970eac5e9bec12160b543a2faeda950acb7e28e88835\" returns successfully" Jan 30 14:11:47.779138 systemd[1]: run-containerd-runc-k8s.io-167ae79acb1e59ce533704d0214e1bf9d258e78e0b6df6bfcf7bea0f66d8d709-runc.yVjOZP.mount: Deactivated successfully. 
Jan 30 14:11:47.863892 kubelet[3254]: I0130 14:11:47.863530 3254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sdxt7" podStartSLOduration=33.810127337 podStartE2EDuration="43.863508919s" podCreationTimestamp="2025-01-30 14:11:04 +0000 UTC" firstStartedPulling="2025-01-30 14:11:29.759359406 +0000 UTC m=+46.780666431" lastFinishedPulling="2025-01-30 14:11:39.812740988 +0000 UTC m=+56.834048013" observedRunningTime="2025-01-30 14:11:40.486789624 +0000 UTC m=+57.508096649" watchObservedRunningTime="2025-01-30 14:11:47.863508919 +0000 UTC m=+64.884815944" Jan 30 14:13:14.788470 systemd[1]: run-containerd-runc-k8s.io-c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95-runc.GDXgwp.mount: Deactivated successfully. Jan 30 14:13:39.959681 systemd[1]: Started sshd@7-10.200.20.33:22-10.200.16.10:57762.service - OpenSSH per-connection server daemon (10.200.16.10:57762). Jan 30 14:13:40.397132 sshd[6100]: Accepted publickey for core from 10.200.16.10 port 57762 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:13:40.399344 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:13:40.404805 systemd-logind[1672]: New session 10 of user core. Jan 30 14:13:40.411302 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:13:40.813005 sshd[6100]: pam_unix(sshd:session): session closed for user core Jan 30 14:13:40.818309 systemd[1]: sshd@7-10.200.20.33:22-10.200.16.10:57762.service: Deactivated successfully. Jan 30 14:13:40.822810 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:13:40.825792 systemd-logind[1672]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:13:40.826997 systemd-logind[1672]: Removed session 10. Jan 30 14:13:45.891950 systemd[1]: Started sshd@8-10.200.20.33:22-10.200.16.10:34048.service - OpenSSH per-connection server daemon (10.200.16.10:34048). 
Jan 30 14:13:46.331544 sshd[6136]: Accepted publickey for core from 10.200.16.10 port 34048 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:13:46.333222 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:13:46.338373 systemd-logind[1672]: New session 11 of user core. Jan 30 14:13:46.343321 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:13:46.730682 sshd[6136]: pam_unix(sshd:session): session closed for user core Jan 30 14:13:46.735406 systemd-logind[1672]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:13:46.735642 systemd[1]: sshd@8-10.200.20.33:22-10.200.16.10:34048.service: Deactivated successfully. Jan 30 14:13:46.739187 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:13:46.742894 systemd-logind[1672]: Removed session 11. Jan 30 14:13:50.752025 systemd[1]: run-containerd-runc-k8s.io-c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95-runc.6zaHN2.mount: Deactivated successfully. Jan 30 14:13:51.821209 systemd[1]: Started sshd@9-10.200.20.33:22-10.200.16.10:34058.service - OpenSSH per-connection server daemon (10.200.16.10:34058). Jan 30 14:13:52.272085 sshd[6189]: Accepted publickey for core from 10.200.16.10 port 34058 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:13:52.272712 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:13:52.278008 systemd-logind[1672]: New session 12 of user core. Jan 30 14:13:52.283332 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:13:52.684875 sshd[6189]: pam_unix(sshd:session): session closed for user core Jan 30 14:13:52.688722 systemd[1]: sshd@9-10.200.20.33:22-10.200.16.10:34058.service: Deactivated successfully. Jan 30 14:13:52.692369 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:13:52.695251 systemd-logind[1672]: Session 12 logged out. Waiting for processes to exit. 
Jan 30 14:13:52.696462 systemd-logind[1672]: Removed session 12. Jan 30 14:13:52.776533 systemd[1]: Started sshd@10-10.200.20.33:22-10.200.16.10:34072.service - OpenSSH per-connection server daemon (10.200.16.10:34072). Jan 30 14:13:53.210970 sshd[6203]: Accepted publickey for core from 10.200.16.10 port 34072 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:13:53.212824 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:13:53.217638 systemd-logind[1672]: New session 13 of user core. Jan 30 14:13:53.224299 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:13:53.655612 sshd[6203]: pam_unix(sshd:session): session closed for user core Jan 30 14:13:53.659977 systemd-logind[1672]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:13:53.660245 systemd[1]: sshd@10-10.200.20.33:22-10.200.16.10:34072.service: Deactivated successfully. Jan 30 14:13:53.664174 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:13:53.667442 systemd-logind[1672]: Removed session 13. Jan 30 14:13:53.740438 systemd[1]: Started sshd@11-10.200.20.33:22-10.200.16.10:34086.service - OpenSSH per-connection server daemon (10.200.16.10:34086). Jan 30 14:13:54.175268 sshd[6213]: Accepted publickey for core from 10.200.16.10 port 34086 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:13:54.177493 sshd[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:13:54.185149 systemd-logind[1672]: New session 14 of user core. Jan 30 14:13:54.193380 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:13:54.563323 sshd[6213]: pam_unix(sshd:session): session closed for user core Jan 30 14:13:54.566998 systemd[1]: sshd@11-10.200.20.33:22-10.200.16.10:34086.service: Deactivated successfully. Jan 30 14:13:54.570287 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 30 14:13:54.573281 systemd-logind[1672]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:13:54.575051 systemd-logind[1672]: Removed session 14. Jan 30 14:13:59.652520 systemd[1]: Started sshd@12-10.200.20.33:22-10.200.16.10:37396.service - OpenSSH per-connection server daemon (10.200.16.10:37396). Jan 30 14:14:00.097730 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 37396 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:00.099630 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:00.106834 systemd-logind[1672]: New session 15 of user core. Jan 30 14:14:00.116431 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:14:00.500503 sshd[6233]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:00.505162 systemd[1]: sshd@12-10.200.20.33:22-10.200.16.10:37396.service: Deactivated successfully. Jan 30 14:14:00.505656 systemd-logind[1672]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:14:00.508492 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:14:00.510161 systemd-logind[1672]: Removed session 15. Jan 30 14:14:05.584491 systemd[1]: Started sshd@13-10.200.20.33:22-10.200.16.10:37402.service - OpenSSH per-connection server daemon (10.200.16.10:37402). Jan 30 14:14:06.018473 sshd[6254]: Accepted publickey for core from 10.200.16.10 port 37402 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:06.020080 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:06.025007 systemd-logind[1672]: New session 16 of user core. Jan 30 14:14:06.031323 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:14:06.405365 sshd[6254]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:06.408821 systemd-logind[1672]: Session 16 logged out. Waiting for processes to exit. 
Jan 30 14:14:06.409174 systemd[1]: sshd@13-10.200.20.33:22-10.200.16.10:37402.service: Deactivated successfully. Jan 30 14:14:06.412033 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:14:06.415421 systemd-logind[1672]: Removed session 16. Jan 30 14:14:11.479373 systemd[1]: Started sshd@14-10.200.20.33:22-10.200.16.10:56890.service - OpenSSH per-connection server daemon (10.200.16.10:56890). Jan 30 14:14:11.900012 sshd[6268]: Accepted publickey for core from 10.200.16.10 port 56890 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:11.901875 sshd[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:11.907374 systemd-logind[1672]: New session 17 of user core. Jan 30 14:14:11.911388 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:14:12.340607 sshd[6268]: pam_unix(sshd:session): session closed for user core Jan 30 14:14:12.345639 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:14:12.348680 systemd-logind[1672]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:14:12.350315 systemd[1]: sshd@14-10.200.20.33:22-10.200.16.10:56890.service: Deactivated successfully. Jan 30 14:14:12.356067 systemd-logind[1672]: Removed session 17. Jan 30 14:14:14.791114 systemd[1]: run-containerd-runc-k8s.io-c9d52a8ed570da1528be827468b6dc7ee34879172b2a5fc1b5e790a0e6e91f95-runc.su5eTn.mount: Deactivated successfully. Jan 30 14:14:17.424520 systemd[1]: Started sshd@15-10.200.20.33:22-10.200.16.10:40140.service - OpenSSH per-connection server daemon (10.200.16.10:40140). Jan 30 14:14:17.847065 sshd[6300]: Accepted publickey for core from 10.200.16.10 port 40140 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4 Jan 30 14:14:17.850299 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:14:17.859939 systemd-logind[1672]: New session 18 of user core. 
Jan 30 14:14:17.863294 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 14:14:18.249421 sshd[6300]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:18.254742 systemd[1]: sshd@15-10.200.20.33:22-10.200.16.10:40140.service: Deactivated successfully.
Jan 30 14:14:18.257480 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 14:14:18.258874 systemd-logind[1672]: Session 18 logged out. Waiting for processes to exit.
Jan 30 14:14:18.260695 systemd-logind[1672]: Removed session 18.
Jan 30 14:14:18.336490 systemd[1]: Started sshd@16-10.200.20.33:22-10.200.16.10:40150.service - OpenSSH per-connection server daemon (10.200.16.10:40150).
Jan 30 14:14:18.774571 sshd[6336]: Accepted publickey for core from 10.200.16.10 port 40150 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:18.776262 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:18.781642 systemd-logind[1672]: New session 19 of user core.
Jan 30 14:14:18.785327 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 14:14:19.376646 sshd[6336]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:19.381513 systemd-logind[1672]: Session 19 logged out. Waiting for processes to exit.
Jan 30 14:14:19.382280 systemd[1]: sshd@16-10.200.20.33:22-10.200.16.10:40150.service: Deactivated successfully.
Jan 30 14:14:19.386452 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 14:14:19.388591 systemd-logind[1672]: Removed session 19.
Jan 30 14:14:19.456453 systemd[1]: Started sshd@17-10.200.20.33:22-10.200.16.10:40154.service - OpenSSH per-connection server daemon (10.200.16.10:40154).
Jan 30 14:14:19.873235 sshd[6346]: Accepted publickey for core from 10.200.16.10 port 40154 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:19.874927 sshd[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:19.880157 systemd-logind[1672]: New session 20 of user core.
Jan 30 14:14:19.886311 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 14:14:22.161508 sshd[6346]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:22.166899 systemd[1]: sshd@17-10.200.20.33:22-10.200.16.10:40154.service: Deactivated successfully.
Jan 30 14:14:22.166901 systemd-logind[1672]: Session 20 logged out. Waiting for processes to exit.
Jan 30 14:14:22.169799 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 14:14:22.170913 systemd-logind[1672]: Removed session 20.
Jan 30 14:14:22.242646 systemd[1]: Started sshd@18-10.200.20.33:22-10.200.16.10:40156.service - OpenSSH per-connection server daemon (10.200.16.10:40156).
Jan 30 14:14:22.662592 sshd[6369]: Accepted publickey for core from 10.200.16.10 port 40156 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:22.664592 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:22.670135 systemd-logind[1672]: New session 21 of user core.
Jan 30 14:14:22.676343 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 14:14:23.193299 sshd[6369]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:23.197693 systemd[1]: sshd@18-10.200.20.33:22-10.200.16.10:40156.service: Deactivated successfully.
Jan 30 14:14:23.201161 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 14:14:23.201875 systemd-logind[1672]: Session 21 logged out. Waiting for processes to exit.
Jan 30 14:14:23.203935 systemd-logind[1672]: Removed session 21.
Jan 30 14:14:23.282439 systemd[1]: Started sshd@19-10.200.20.33:22-10.200.16.10:40160.service - OpenSSH per-connection server daemon (10.200.16.10:40160).
Jan 30 14:14:23.702365 sshd[6380]: Accepted publickey for core from 10.200.16.10 port 40160 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:23.704122 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:23.708838 systemd-logind[1672]: New session 22 of user core.
Jan 30 14:14:23.720371 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 14:14:24.091590 sshd[6380]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:24.096497 systemd[1]: sshd@19-10.200.20.33:22-10.200.16.10:40160.service: Deactivated successfully.
Jan 30 14:14:24.098925 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 14:14:24.099813 systemd-logind[1672]: Session 22 logged out. Waiting for processes to exit.
Jan 30 14:14:24.101324 systemd-logind[1672]: Removed session 22.
Jan 30 14:14:29.176608 systemd[1]: Started sshd@20-10.200.20.33:22-10.200.16.10:47240.service - OpenSSH per-connection server daemon (10.200.16.10:47240).
Jan 30 14:14:29.598984 sshd[6415]: Accepted publickey for core from 10.200.16.10 port 47240 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:29.600724 sshd[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:29.605867 systemd-logind[1672]: New session 23 of user core.
Jan 30 14:14:29.609366 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:14:29.980422 sshd[6415]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:29.985054 systemd-logind[1672]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:14:29.986519 systemd[1]: sshd@20-10.200.20.33:22-10.200.16.10:47240.service: Deactivated successfully.
Jan 30 14:14:29.990260 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:14:29.992614 systemd-logind[1672]: Removed session 23.
Jan 30 14:14:35.056501 systemd[1]: Started sshd@21-10.200.20.33:22-10.200.16.10:47252.service - OpenSSH per-connection server daemon (10.200.16.10:47252).
Jan 30 14:14:35.479031 sshd[6427]: Accepted publickey for core from 10.200.16.10 port 47252 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:35.481144 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:35.488007 systemd-logind[1672]: New session 24 of user core.
Jan 30 14:14:35.497338 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 14:14:35.851412 sshd[6427]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:35.856455 systemd[1]: sshd@21-10.200.20.33:22-10.200.16.10:47252.service: Deactivated successfully.
Jan 30 14:14:35.859949 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 14:14:35.861442 systemd-logind[1672]: Session 24 logged out. Waiting for processes to exit.
Jan 30 14:14:35.862935 systemd-logind[1672]: Removed session 24.
Jan 30 14:14:40.935462 systemd[1]: Started sshd@22-10.200.20.33:22-10.200.16.10:55508.service - OpenSSH per-connection server daemon (10.200.16.10:55508).
Jan 30 14:14:41.347933 sshd[6440]: Accepted publickey for core from 10.200.16.10 port 55508 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:41.349662 sshd[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:41.354712 systemd-logind[1672]: New session 25 of user core.
Jan 30 14:14:41.363334 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 14:14:41.725142 sshd[6440]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:41.729122 systemd-logind[1672]: Session 25 logged out. Waiting for processes to exit.
Jan 30 14:14:41.729454 systemd[1]: sshd@22-10.200.20.33:22-10.200.16.10:55508.service: Deactivated successfully.
Jan 30 14:14:41.732774 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 14:14:41.736091 systemd-logind[1672]: Removed session 25.
Jan 30 14:14:46.810466 systemd[1]: Started sshd@23-10.200.20.33:22-10.200.16.10:53676.service - OpenSSH per-connection server daemon (10.200.16.10:53676).
Jan 30 14:14:47.245617 sshd[6472]: Accepted publickey for core from 10.200.16.10 port 53676 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:47.247557 sshd[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:47.253243 systemd-logind[1672]: New session 26 of user core.
Jan 30 14:14:47.259349 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 14:14:47.650553 sshd[6472]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:47.655898 systemd[1]: sshd@23-10.200.20.33:22-10.200.16.10:53676.service: Deactivated successfully.
Jan 30 14:14:47.656130 systemd-logind[1672]: Session 26 logged out. Waiting for processes to exit.
Jan 30 14:14:47.660356 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 14:14:47.661673 systemd-logind[1672]: Removed session 26.
Jan 30 14:14:52.731795 systemd[1]: Started sshd@24-10.200.20.33:22-10.200.16.10:53692.service - OpenSSH per-connection server daemon (10.200.16.10:53692).
Jan 30 14:14:53.144347 sshd[6525]: Accepted publickey for core from 10.200.16.10 port 53692 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:53.146326 sshd[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:53.152942 systemd-logind[1672]: New session 27 of user core.
Jan 30 14:14:53.155383 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 14:14:53.534973 sshd[6525]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:53.538448 systemd[1]: sshd@24-10.200.20.33:22-10.200.16.10:53692.service: Deactivated successfully.
Jan 30 14:14:53.540990 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 14:14:53.543765 systemd-logind[1672]: Session 27 logged out. Waiting for processes to exit.
Jan 30 14:14:53.545011 systemd-logind[1672]: Removed session 27.
Jan 30 14:14:58.623624 systemd[1]: Started sshd@25-10.200.20.33:22-10.200.16.10:41786.service - OpenSSH per-connection server daemon (10.200.16.10:41786).
Jan 30 14:14:59.039133 sshd[6540]: Accepted publickey for core from 10.200.16.10 port 41786 ssh2: RSA SHA256:RupaCbuZF2fYrs0zNLe4BMu5hDgJTCRY2dyVdJI+6w4
Jan 30 14:14:59.040494 sshd[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:14:59.046325 systemd-logind[1672]: New session 28 of user core.
Jan 30 14:14:59.051372 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 14:14:59.418394 sshd[6540]: pam_unix(sshd:session): session closed for user core
Jan 30 14:14:59.421569 systemd[1]: sshd@25-10.200.20.33:22-10.200.16.10:41786.service: Deactivated successfully.
Jan 30 14:14:59.425226 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 14:14:59.426860 systemd-logind[1672]: Session 28 logged out. Waiting for processes to exit.
Jan 30 14:14:59.428394 systemd-logind[1672]: Removed session 28.