Dec 13 01:25:53.368845 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:25:53.368868 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:25:53.368876 kernel: KASLR enabled
Dec 13 01:25:53.368882 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 13 01:25:53.368890 kernel: printk: bootconsole [pl11] enabled
Dec 13 01:25:53.368895 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:25:53.368902 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Dec 13 01:25:53.368908 kernel: random: crng init done
Dec 13 01:25:53.368915 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:25:53.368921 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Dec 13 01:25:53.368927 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.368933 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.368940 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 13 01:25:53.368947 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.368954 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.368960 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.368967 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.368975 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.368982 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.368988 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 13 01:25:53.368995 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:53.369001 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 13 01:25:53.369008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 13 01:25:53.369014 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Dec 13 01:25:53.369020 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Dec 13 01:25:53.369027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Dec 13 01:25:53.369033 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Dec 13 01:25:53.369040 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Dec 13 01:25:53.369048 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Dec 13 01:25:53.369054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Dec 13 01:25:53.369060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Dec 13 01:25:53.369067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Dec 13 01:25:53.369073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Dec 13 01:25:53.369079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Dec 13 01:25:53.369085 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Dec 13 01:25:53.369092 kernel: Zone ranges:
Dec 13 01:25:53.369098 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 13 01:25:53.369104 kernel: DMA32 empty
Dec 13 01:25:53.369110 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 01:25:53.369117 kernel: Movable zone start for each node
Dec 13 01:25:53.369127 kernel: Early memory node ranges
Dec 13 01:25:53.369134 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 13 01:25:53.369140 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Dec 13 01:25:53.371197 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Dec 13 01:25:53.371208 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Dec 13 01:25:53.371222 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Dec 13 01:25:53.371229 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Dec 13 01:25:53.371235 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 01:25:53.371243 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 01:25:53.371250 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 13 01:25:53.371257 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:25:53.371264 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:25:53.371270 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:25:53.371277 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 13 01:25:53.371284 kernel: psci: SMC Calling Convention v1.4
Dec 13 01:25:53.371291 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 13 01:25:53.371298 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 13 01:25:53.371306 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:25:53.371313 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:25:53.371320 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:25:53.371327 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:25:53.371334 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:25:53.371341 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:25:53.371347 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:25:53.371354 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:25:53.371361 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:25:53.371368 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:25:53.371374 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Dec 13 01:25:53.371383 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:25:53.371390 kernel: alternatives: applying boot alternatives
Dec 13 01:25:53.371398 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:25:53.371406 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:25:53.371413 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:25:53.371419 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:25:53.371426 kernel: Fallback order for Node 0: 0
Dec 13 01:25:53.371433 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Dec 13 01:25:53.371440 kernel: Policy zone: Normal
Dec 13 01:25:53.371446 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:25:53.371453 kernel: software IO TLB: area num 2.
Dec 13 01:25:53.371462 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Dec 13 01:25:53.371469 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Dec 13 01:25:53.371476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:25:53.371482 kernel: trace event string verifier disabled
Dec 13 01:25:53.371489 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:25:53.371497 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:25:53.371504 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:25:53.371511 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:25:53.371518 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:25:53.371525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:25:53.371531 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:25:53.371539 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:25:53.371546 kernel: GICv3: 960 SPIs implemented
Dec 13 01:25:53.371553 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:25:53.371559 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:25:53.371566 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:25:53.371572 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 13 01:25:53.371579 kernel: ITS: No ITS available, not enabling LPIs
Dec 13 01:25:53.371586 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:25:53.371593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:25:53.371600 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:25:53.371607 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:25:53.371614 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:25:53.371623 kernel: Console: colour dummy device 80x25
Dec 13 01:25:53.371630 kernel: printk: console [tty1] enabled
Dec 13 01:25:53.371637 kernel: ACPI: Core revision 20230628
Dec 13 01:25:53.371644 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:25:53.371651 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:25:53.371658 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:25:53.371665 kernel: landlock: Up and running.
Dec 13 01:25:53.371672 kernel: SELinux: Initializing.
Dec 13 01:25:53.371679 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:25:53.371688 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:25:53.371695 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:25:53.371702 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:25:53.371709 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Dec 13 01:25:53.371717 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Dec 13 01:25:53.371723 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 13 01:25:53.371731 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:25:53.371744 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:25:53.371752 kernel: Remapping and enabling EFI services.
Dec 13 01:25:53.371759 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:25:53.371767 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:25:53.371776 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 13 01:25:53.371783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:25:53.371791 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:25:53.371798 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:25:53.371805 kernel: SMP: Total of 2 processors activated.
Dec 13 01:25:53.371812 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:25:53.371821 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 13 01:25:53.371829 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:25:53.371836 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:25:53.371844 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:25:53.371851 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:25:53.371858 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:25:53.371866 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:25:53.371873 kernel: alternatives: applying system-wide alternatives
Dec 13 01:25:53.371881 kernel: devtmpfs: initialized
Dec 13 01:25:53.371890 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:25:53.371897 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:25:53.371904 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:25:53.371912 kernel: SMBIOS 3.1.0 present.
Dec 13 01:25:53.371919 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Dec 13 01:25:53.371927 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:25:53.371934 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:25:53.371941 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:25:53.371950 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:25:53.371958 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:25:53.371965 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Dec 13 01:25:53.371972 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:25:53.371980 kernel: cpuidle: using governor menu
Dec 13 01:25:53.371987 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:25:53.371994 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:25:53.372002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:25:53.372009 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:25:53.372018 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:25:53.372025 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:25:53.372033 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:25:53.372040 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:25:53.372048 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:25:53.372055 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:25:53.372062 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:25:53.372069 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:25:53.372077 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:25:53.372086 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:25:53.372093 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:25:53.372100 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:25:53.372107 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:25:53.372115 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:25:53.372122 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:25:53.372129 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:25:53.372136 kernel: ACPI: Interpreter enabled
Dec 13 01:25:53.372154 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:25:53.372163 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:25:53.372172 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:25:53.372179 kernel: printk: bootconsole [pl11] disabled
Dec 13 01:25:53.372187 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 13 01:25:53.372194 kernel: iommu: Default domain type: Translated
Dec 13 01:25:53.372201 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:25:53.372209 kernel: efivars: Registered efivars operations
Dec 13 01:25:53.372216 kernel: vgaarb: loaded
Dec 13 01:25:53.372223 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:25:53.372230 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:25:53.372240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:25:53.372247 kernel: pnp: PnP ACPI init
Dec 13 01:25:53.372254 kernel: pnp: PnP ACPI: found 0 devices
Dec 13 01:25:53.372262 kernel: NET: Registered PF_INET protocol family
Dec 13 01:25:53.372269 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:25:53.372277 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:25:53.372284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:25:53.372292 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:25:53.372301 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:25:53.372308 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:25:53.372316 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:25:53.372323 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:25:53.372330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:25:53.372338 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:25:53.372345 kernel: kvm [1]: HYP mode not available
Dec 13 01:25:53.372352 kernel: Initialise system trusted keyrings
Dec 13 01:25:53.372359 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:25:53.372368 kernel: Key type asymmetric registered
Dec 13 01:25:53.372375 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:25:53.372382 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:25:53.372390 kernel: io scheduler mq-deadline registered
Dec 13 01:25:53.372397 kernel: io scheduler kyber registered
Dec 13 01:25:53.372405 kernel: io scheduler bfq registered
Dec 13 01:25:53.372412 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:25:53.372419 kernel: thunder_xcv, ver 1.0
Dec 13 01:25:53.372427 kernel: thunder_bgx, ver 1.0
Dec 13 01:25:53.372434 kernel: nicpf, ver 1.0
Dec 13 01:25:53.372443 kernel: nicvf, ver 1.0
Dec 13 01:25:53.372594 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:25:53.372668 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:25:52 UTC (1734053152)
Dec 13 01:25:53.372679 kernel: efifb: probing for efifb
Dec 13 01:25:53.372686 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 01:25:53.372694 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 01:25:53.372701 kernel: efifb: scrolling: redraw
Dec 13 01:25:53.372711 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:25:53.372718 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:25:53.372725 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:25:53.372733 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 13 01:25:53.372740 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:25:53.372747 kernel: No ACPI PMU IRQ for CPU0
Dec 13 01:25:53.372755 kernel: No ACPI PMU IRQ for CPU1
Dec 13 01:25:53.372762 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Dec 13 01:25:53.372769 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:25:53.372778 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:25:53.372785 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:25:53.372793 kernel: Segment Routing with IPv6
Dec 13 01:25:53.372800 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:25:53.372807 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:25:53.372815 kernel: Key type dns_resolver registered
Dec 13 01:25:53.372822 kernel: registered taskstats version 1
Dec 13 01:25:53.372829 kernel: Loading compiled-in X.509 certificates
Dec 13 01:25:53.372837 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:25:53.372844 kernel: Key type .fscrypt registered
Dec 13 01:25:53.372853 kernel: Key type fscrypt-provisioning registered
Dec 13 01:25:53.372861 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:25:53.372868 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:25:53.372875 kernel: ima: No architecture policies found
Dec 13 01:25:53.372883 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:25:53.372891 kernel: clk: Disabling unused clocks
Dec 13 01:25:53.372898 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:25:53.372905 kernel: Run /init as init process
Dec 13 01:25:53.372914 kernel: with arguments:
Dec 13 01:25:53.372921 kernel: /init
Dec 13 01:25:53.372928 kernel: with environment:
Dec 13 01:25:53.372935 kernel: HOME=/
Dec 13 01:25:53.372943 kernel: TERM=linux
Dec 13 01:25:53.372950 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:25:53.372959 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:25:53.372969 systemd[1]: Detected virtualization microsoft.
Dec 13 01:25:53.372979 systemd[1]: Detected architecture arm64.
Dec 13 01:25:53.372987 systemd[1]: Running in initrd.
Dec 13 01:25:53.372995 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:25:53.373003 systemd[1]: Hostname set to .
Dec 13 01:25:53.373011 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:25:53.373019 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:25:53.373027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:25:53.373035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:25:53.373044 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:25:53.373053 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:25:53.373061 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:25:53.373069 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:25:53.373078 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:25:53.373087 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:25:53.373095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:25:53.373105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:25:53.373113 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:25:53.373121 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:25:53.373128 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:25:53.373136 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:25:53.375184 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:25:53.375206 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:25:53.375215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:25:53.375229 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:25:53.375237 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:25:53.375245 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:25:53.375253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:25:53.375261 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:25:53.375269 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:25:53.375277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:25:53.375285 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:25:53.375293 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:25:53.375303 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:25:53.375311 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:25:53.375346 systemd-journald[217]: Collecting audit messages is disabled.
Dec 13 01:25:53.375367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:53.375379 systemd-journald[217]: Journal started
Dec 13 01:25:53.375398 systemd-journald[217]: Runtime Journal (/run/log/journal/baaeb9f79c7141ef96966ac0c3832c94) is 8.0M, max 78.5M, 70.5M free.
Dec 13 01:25:53.387051 systemd-modules-load[218]: Inserted module 'overlay'
Dec 13 01:25:53.408635 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:25:53.409237 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:25:53.440453 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:25:53.440481 kernel: Bridge firewalling registered
Dec 13 01:25:53.432578 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:25:53.437326 systemd-modules-load[218]: Inserted module 'br_netfilter'
Dec 13 01:25:53.448347 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:25:53.461161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:25:53.471179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:53.495458 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:25:53.504341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:25:53.519308 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:25:53.545745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:25:53.561357 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:25:53.573170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:25:53.586438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:25:53.599098 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:25:53.626448 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:25:53.636354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:25:53.658472 dracut-cmdline[249]: dracut-dracut-053
Dec 13 01:25:53.658472 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:25:53.699229 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:25:53.716876 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:25:53.730542 systemd-resolved[252]: Positive Trust Anchors:
Dec 13 01:25:53.730552 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:25:53.730584 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:25:53.732800 systemd-resolved[252]: Defaulting to hostname 'linux'.
Dec 13 01:25:53.734540 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:25:53.742397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:25:53.849194 kernel: SCSI subsystem initialized
Dec 13 01:25:53.859183 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:25:53.870174 kernel: iscsi: registered transport (tcp)
Dec 13 01:25:53.888107 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:25:53.888133 kernel: QLogic iSCSI HBA Driver
Dec 13 01:25:53.921584 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:25:53.937479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:25:53.970098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:25:53.970150 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:25:53.976826 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:25:54.024503 kernel: raid6: neonx8 gen() 15747 MB/s
Dec 13 01:25:54.043156 kernel: raid6: neonx4 gen() 15423 MB/s
Dec 13 01:25:54.063158 kernel: raid6: neonx2 gen() 13267 MB/s
Dec 13 01:25:54.084154 kernel: raid6: neonx1 gen() 10482 MB/s
Dec 13 01:25:54.104153 kernel: raid6: int64x8 gen() 6960 MB/s
Dec 13 01:25:54.124152 kernel: raid6: int64x4 gen() 7354 MB/s
Dec 13 01:25:54.145154 kernel: raid6: int64x2 gen() 6134 MB/s
Dec 13 01:25:54.168513 kernel: raid6: int64x1 gen() 5062 MB/s
Dec 13 01:25:54.168533 kernel: raid6: using algorithm neonx8 gen() 15747 MB/s
Dec 13 01:25:54.192575 kernel: raid6: .... xor() 11931 MB/s, rmw enabled
Dec 13 01:25:54.192592 kernel: raid6: using neon recovery algorithm
Dec 13 01:25:54.205445 kernel: xor: measuring software checksum speed
Dec 13 01:25:54.205465 kernel: 8regs : 19793 MB/sec
Dec 13 01:25:54.209183 kernel: 32regs : 19580 MB/sec
Dec 13 01:25:54.212826 kernel: arm64_neon : 26892 MB/sec
Dec 13 01:25:54.217299 kernel: xor: using function: arm64_neon (26892 MB/sec)
Dec 13 01:25:54.268165 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:25:54.278188 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:25:54.299348 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:25:54.323604 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Dec 13 01:25:54.330043 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:25:54.354301 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:25:54.367938 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Dec 13 01:25:54.394203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:25:54.409383 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:25:54.452698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:25:54.478392 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:25:54.508774 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:25:54.520323 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:25:54.534850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:25:54.550407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:25:54.571391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:25:54.588181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:25:54.618359 kernel: hv_vmbus: Vmbus version:5.3
Dec 13 01:25:54.618384 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 01:25:54.618394 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 01:25:54.618404 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 13 01:25:54.609778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:25:54.662992 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 01:25:54.663181 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:25:54.663193 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 01:25:54.663203 kernel: scsi host0: storvsc_host_t
Dec 13 01:25:54.609932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:25:54.715821 kernel: scsi host1: storvsc_host_t
Dec 13 01:25:54.716014 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 13 01:25:54.716028 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 01:25:54.716127 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 01:25:54.716137 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 01:25:54.716247 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:25:54.640228 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:25:54.738400 kernel: PTP clock support registered
Dec 13 01:25:54.701991 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:25:54.702262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:54.768727 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:25:54.724298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:55.058541 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:25:55.058565 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 01:25:55.058740 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:25:55.058755 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:25:55.058765 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:25:55.058774 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:25:55.058783 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 01:25:54.779204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:55.101686 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 01:25:55.142410 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 01:25:55.142565 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:25:55.142656 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 01:25:55.142742 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 01:25:55.142826 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: VF slot 1 added
Dec 13 01:25:55.142928 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:25:55.142939 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:25:55.050168 systemd-resolved[252]: Clock change detected. Flushing caches.
Dec 13 01:25:55.105003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:25:55.198127 kernel: hv_vmbus: registering driver hv_pci
Dec 13 01:25:55.198154 kernel: hv_pci 23cb5dac-b61e-4d4f-9cf3-6e8f30e46176: PCI VMBus probing: Using version 0x10004
Dec 13 01:25:55.311110 kernel: hv_pci 23cb5dac-b61e-4d4f-9cf3-6e8f30e46176: PCI host bridge to bus b61e:00
Dec 13 01:25:55.311251 kernel: pci_bus b61e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Dec 13 01:25:55.311358 kernel: pci_bus b61e:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 01:25:55.311453 kernel: pci b61e:00:02.0: [15b3:1018] type 00 class 0x020000
Dec 13 01:25:55.311569 kernel: pci b61e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 01:25:55.311671 kernel: pci b61e:00:02.0: enabling Extended Tags
Dec 13 01:25:55.311766 kernel: pci b61e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b61e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Dec 13 01:25:55.311866 kernel: pci_bus b61e:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 01:25:55.311953 kernel: pci b61e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 01:25:55.105094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:55.141580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:55.166289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:55.202637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:25:55.248098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:25:55.367181 kernel: mlx5_core b61e:00:02.0: enabling device (0000 -> 0002)
Dec 13 01:25:55.599843 kernel: mlx5_core b61e:00:02.0: firmware version: 16.30.1284
Dec 13 01:25:55.599991 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: VF registering: eth1
Dec 13 01:25:55.600110 kernel: mlx5_core b61e:00:02.0 eth1: joined to eth0
Dec 13 01:25:55.600202 kernel: mlx5_core b61e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Dec 13 01:25:55.605008 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Dec 13 01:25:55.626583 kernel: mlx5_core b61e:00:02.0 enP46622s1: renamed from eth1
Dec 13 01:25:55.710668 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (491)
Dec 13 01:25:55.715742 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Dec 13 01:25:55.748212 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (483)
Dec 13 01:25:55.737522 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Dec 13 01:25:55.751516 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Dec 13 01:25:55.775433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 13 01:25:55.795666 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:25:55.822467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:25:55.831444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:25:56.833087 disk-uuid[605]: The operation has completed successfully.
Dec 13 01:25:56.839772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:25:56.901906 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:25:56.902003 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:25:56.933045 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:25:56.949708 sh[691]: Success
Dec 13 01:25:56.978482 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:25:57.361896 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:25:57.383555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:25:57.394060 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:25:57.426638 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:25:57.426699 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:25:57.437760 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:25:57.445195 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:25:57.452543 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:25:57.754249 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:25:57.760763 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:25:57.780695 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:25:57.811778 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:57.811837 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:25:57.806615 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:25:57.834286 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:25:57.856713 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:25:57.875405 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:25:57.881278 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:57.914213 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:25:57.930704 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:25:57.960531 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:25:57.982573 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:25:58.018675 systemd-networkd[875]: lo: Link UP
Dec 13 01:25:58.018686 systemd-networkd[875]: lo: Gained carrier
Dec 13 01:25:58.020896 systemd-networkd[875]: Enumeration completed
Dec 13 01:25:58.021018 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:25:58.034575 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:25:58.034579 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:25:58.035018 systemd[1]: Reached target network.target - Network.
Dec 13 01:25:58.135458 kernel: mlx5_core b61e:00:02.0 enP46622s1: Link up
Dec 13 01:25:58.177449 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: Data path switched to VF: enP46622s1
Dec 13 01:25:58.178595 systemd-networkd[875]: enP46622s1: Link UP
Dec 13 01:25:58.178688 systemd-networkd[875]: eth0: Link UP
Dec 13 01:25:58.178805 systemd-networkd[875]: eth0: Gained carrier
Dec 13 01:25:58.178814 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:25:58.207743 systemd-networkd[875]: enP46622s1: Gained carrier
Dec 13 01:25:58.222478 systemd-networkd[875]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 13 01:25:58.829932 ignition[851]: Ignition 2.19.0
Dec 13 01:25:58.829942 ignition[851]: Stage: fetch-offline
Dec 13 01:25:58.832995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:25:58.829980 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:58.829989 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:25:58.830085 ignition[851]: parsed url from cmdline: ""
Dec 13 01:25:58.861604 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:25:58.830088 ignition[851]: no config URL provided
Dec 13 01:25:58.830098 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:25:58.830106 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:25:58.830111 ignition[851]: failed to fetch config: resource requires networking
Dec 13 01:25:58.830280 ignition[851]: Ignition finished successfully
Dec 13 01:25:58.887648 ignition[886]: Ignition 2.19.0
Dec 13 01:25:58.887655 ignition[886]: Stage: fetch
Dec 13 01:25:58.887864 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:58.887874 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:25:58.887981 ignition[886]: parsed url from cmdline: ""
Dec 13 01:25:58.887987 ignition[886]: no config URL provided
Dec 13 01:25:58.887992 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:25:58.887998 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:25:58.888022 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 01:25:59.001809 ignition[886]: GET result: OK
Dec 13 01:25:59.001872 ignition[886]: config has been read from IMDS userdata
Dec 13 01:25:59.001912 ignition[886]: parsing config with SHA512: ed2d665e7bf631ee08b6fb552a53e1ac29d9ffae445702004a817e3ad2c98f072976afd7bc066dfb57f0c034d1799ad46b3ae31e7b4ea52a6d4fc1e5370181a6
Dec 13 01:25:59.006566 unknown[886]: fetched base config from "system"
Dec 13 01:25:59.007122 ignition[886]: fetch: fetch complete
Dec 13 01:25:59.006573 unknown[886]: fetched base config from "system"
Dec 13 01:25:59.007128 ignition[886]: fetch: fetch passed
Dec 13 01:25:59.006651 unknown[886]: fetched user config from "azure"
Dec 13 01:25:59.007198 ignition[886]: Ignition finished successfully
Dec 13 01:25:59.013920 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:25:59.067213 ignition[892]: Ignition 2.19.0
Dec 13 01:25:59.038656 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:25:59.067219 ignition[892]: Stage: kargs
Dec 13 01:25:59.078482 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:25:59.067443 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:59.067453 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:25:59.101755 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:25:59.069014 ignition[892]: kargs: kargs passed
Dec 13 01:25:59.138612 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:25:59.069074 ignition[892]: Ignition finished successfully
Dec 13 01:25:59.149608 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:25:59.135846 ignition[898]: Ignition 2.19.0
Dec 13 01:25:59.161009 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:25:59.135854 ignition[898]: Stage: disks
Dec 13 01:25:59.175923 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:25:59.136115 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:25:59.188162 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:25:59.136126 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:25:59.202231 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:25:59.137336 ignition[898]: disks: disks passed
Dec 13 01:25:59.238714 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:25:59.137418 ignition[898]: Ignition finished successfully
Dec 13 01:25:59.352509 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Dec 13 01:25:59.365564 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:25:59.386675 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:25:59.449632 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:25:59.450171 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:25:59.456326 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:25:59.500507 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:25:59.516577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:25:59.531548 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:25:59.565506 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (918)
Dec 13 01:25:59.565531 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:59.549275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:25:59.599048 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:25:59.549312 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:25:59.615149 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:25:59.605768 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:25:59.633820 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:25:59.654239 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:25:59.659630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:25:59.764588 systemd-networkd[875]: enP46622s1: Gained IPv6LL
Dec 13 01:25:59.828614 systemd-networkd[875]: eth0: Gained IPv6LL
Dec 13 01:26:00.130085 coreos-metadata[920]: Dec 13 01:26:00.130 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:26:00.147247 coreos-metadata[920]: Dec 13 01:26:00.147 INFO Fetch successful
Dec 13 01:26:00.147247 coreos-metadata[920]: Dec 13 01:26:00.147 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:26:00.168606 coreos-metadata[920]: Dec 13 01:26:00.161 INFO Fetch successful
Dec 13 01:26:00.176017 coreos-metadata[920]: Dec 13 01:26:00.175 INFO wrote hostname ci-4081.2.1-a-d903163327 to /sysroot/etc/hostname
Dec 13 01:26:00.193535 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:26:00.332325 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:26:00.343394 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:26:00.355233 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:26:00.378995 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:26:01.146213 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:26:01.164655 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:26:01.173716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:26:01.205872 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:01.199994 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:26:01.223646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:26:01.238789 ignition[1036]: INFO : Ignition 2.19.0
Dec 13 01:26:01.238789 ignition[1036]: INFO : Stage: mount
Dec 13 01:26:01.247541 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:01.247541 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:01.247541 ignition[1036]: INFO : mount: mount passed
Dec 13 01:26:01.247541 ignition[1036]: INFO : Ignition finished successfully
Dec 13 01:26:01.248489 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:26:01.276617 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:26:01.299303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:26:01.331624 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047)
Dec 13 01:26:01.348281 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:01.348310 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:26:01.353219 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:26:01.361452 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:26:01.362661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:26:01.391661 ignition[1064]: INFO : Ignition 2.19.0
Dec 13 01:26:01.391661 ignition[1064]: INFO : Stage: files
Dec 13 01:26:01.401709 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:01.401709 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:01.401709 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:26:01.424755 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:26:01.424755 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:26:01.473060 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:26:01.482525 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:26:01.482525 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:26:01.473487 unknown[1064]: wrote ssh authorized keys file for user: core
Dec 13 01:26:01.511987 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:26:01.511987 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:26:01.813157 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:26:01.900494 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:26:02.363304 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:26:02.715766 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:02.715766 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:26:02.740684 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:26:02.753882 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:26:02.753882 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:26:02.753882 ignition[1064]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:26:02.753882 ignition[1064]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:26:02.802188 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:26:02.802188 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:26:02.802188 ignition[1064]: INFO : files: files passed
Dec 13 01:26:02.802188 ignition[1064]: INFO : Ignition finished successfully
Dec 13 01:26:02.767750 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:26:02.804223 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:26:02.823628 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:26:02.855849 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:26:02.855958 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:26:02.910214 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:02.910214 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:02.934178 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:02.921990 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:26:02.947354 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:26:02.991706 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:26:03.030828 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:26:03.030969 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:26:03.043863 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:26:03.058631 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:26:03.072886 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:26:03.092960 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:26:03.118053 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:26:03.136696 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:26:03.157365 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:26:03.157483 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:26:03.173112 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:26:03.190445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:26:03.212524 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:26:03.229039 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:26:03.229126 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:26:03.252052 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:26:03.260355 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:26:03.275067 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:26:03.290483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:26:03.304859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:26:03.320707 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:26:03.334373 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:26:03.349691 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:26:03.363953 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:26:03.379271 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:26:03.391923 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:26:03.391998 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:26:03.410275 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:26:03.424747 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:26:03.439838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:26:03.447422 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:26:03.456535 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:26:03.456609 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:26:03.478317 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:26:03.478386 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:26:03.489384 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:26:03.489445 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:26:03.508801 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:26:03.586264 ignition[1118]: INFO : Ignition 2.19.0 Dec 13 01:26:03.586264 ignition[1118]: INFO : Stage: umount Dec 13 01:26:03.586264 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:26:03.586264 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:26:03.586264 ignition[1118]: INFO : umount: umount passed Dec 13 01:26:03.586264 ignition[1118]: INFO : Ignition finished successfully Dec 13 01:26:03.508867 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:26:03.550572 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:26:03.579502 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:26:03.592403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:26:03.592503 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:26:03.612729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:26:03.612792 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:26:03.628014 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:26:03.628103 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:26:03.637684 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:26:03.638052 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:26:03.638096 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:26:03.646903 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:26:03.646964 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:26:03.659115 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:26:03.659168 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Dec 13 01:26:03.674589 systemd[1]: Stopped target network.target - Network. Dec 13 01:26:03.688721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:26:03.688797 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:26:03.703479 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:26:03.717394 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:26:03.725721 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:26:03.735759 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:26:03.750572 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:26:03.763412 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:26:03.763479 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:26:03.776128 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:26:03.776172 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:26:03.791039 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:26:03.791108 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:26:03.805492 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:26:03.805543 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:26:03.819832 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:26:03.834723 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:26:03.848478 systemd-networkd[875]: eth0: DHCPv6 lease lost Dec 13 01:26:03.857673 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:26:03.857897 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:26:04.161532 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: Data path switched from VF: enP46622s1 Dec 13 01:26:03.871892 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:26:03.872000 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:26:03.889021 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:26:03.889080 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:26:03.942591 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:26:03.956808 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:26:03.956896 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:26:03.973046 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:03.973100 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:03.989407 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:26:03.989490 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:26:03.997939 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:26:03.997991 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:26:04.013001 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:26:04.032958 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:26:04.033455 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Dec 13 01:26:04.078902 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:26:04.079053 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:26:04.097710 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:26:04.097793 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:26:04.113739 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:26:04.113788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:26:04.127635 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:26:04.127696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:26:04.161446 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:26:04.161519 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:26:04.175924 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:26:04.175989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:26:04.198953 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:26:04.199042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:26:04.235797 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:26:04.458471 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:26:04.252840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:26:04.252920 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:26:04.267082 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:26:04.267127 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:26:04.281057 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:26:04.281111 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:26:04.296604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:26:04.296655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:26:04.311310 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:26:04.311443 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:26:04.324561 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:26:04.324661 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:26:04.339512 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:26:04.375725 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:26:04.391670 systemd[1]: Switching root. 
Dec 13 01:26:04.551888 systemd-journald[217]: Journal stopped
Total pages: 1032156 Dec 13 01:25:53.371440 kernel: Policy zone: Normal Dec 13 01:25:53.371446 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:25:53.371453 kernel: software IO TLB: area num 2. Dec 13 01:25:53.371462 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Dec 13 01:25:53.371469 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Dec 13 01:25:53.371476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:25:53.371482 kernel: trace event string verifier disabled Dec 13 01:25:53.371489 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:25:53.371497 kernel: rcu: RCU event tracing is enabled. Dec 13 01:25:53.371504 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:25:53.371511 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:25:53.371518 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:25:53.371525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:25:53.371531 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:25:53.371539 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:25:53.371546 kernel: GICv3: 960 SPIs implemented Dec 13 01:25:53.371553 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:25:53.371559 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:25:53.371566 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:25:53.371572 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 01:25:53.371579 kernel: ITS: No ITS available, not enabling LPIs Dec 13 01:25:53.371586 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:25:53.371593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:25:53.371600 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:25:53.371607 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:25:53.371614 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:25:53.371623 kernel: Console: colour dummy device 80x25 Dec 13 01:25:53.371630 kernel: printk: console [tty1] enabled Dec 13 01:25:53.371637 kernel: ACPI: Core revision 20230628 Dec 13 01:25:53.371644 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:25:53.371651 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:25:53.371658 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:25:53.371665 kernel: landlock: Up and running. Dec 13 01:25:53.371672 kernel: SELinux: Initializing. Dec 13 01:25:53.371679 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:25:53.371688 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:25:53.371695 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:53.371702 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 13 01:25:53.371709 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 01:25:53.371717 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 01:25:53.371723 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:25:53.371731 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:25:53.371744 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:25:53.371752 kernel: Remapping and enabling EFI services. Dec 13 01:25:53.371759 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:25:53.371767 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:25:53.371776 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 01:25:53.371783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:25:53.371791 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:25:53.371798 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:25:53.371805 kernel: SMP: Total of 2 processors activated. Dec 13 01:25:53.371812 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:25:53.371821 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 01:25:53.371829 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:25:53.371836 kernel: CPU features: detected: CRC32 instructions Dec 13 01:25:53.371844 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:25:53.371851 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:25:53.371858 kernel: CPU features: detected: Privileged Access Never Dec 13 01:25:53.371866 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:25:53.371873 kernel: alternatives: applying system-wide alternatives Dec 13 01:25:53.371881 kernel: devtmpfs: initialized Dec 13 01:25:53.371890 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:25:53.371897 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:25:53.371904 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:25:53.371912 kernel: SMBIOS 3.1.0 present. Dec 13 01:25:53.371919 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 01:25:53.371927 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:25:53.371934 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:25:53.371941 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:25:53.371950 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:25:53.371958 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:25:53.371965 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 01:25:53.371972 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:25:53.371980 kernel: cpuidle: using governor menu Dec 13 01:25:53.371987 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 01:25:53.371994 kernel: ASID allocator initialised with 32768 entries Dec 13 01:25:53.372002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:25:53.372009 kernel: Serial: AMBA PL011 UART driver Dec 13 01:25:53.372018 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:25:53.372025 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:25:53.372033 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:25:53.372040 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:25:53.372048 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:25:53.372055 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:25:53.372062 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:25:53.372069 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:25:53.372077 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:25:53.372086 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:25:53.372093 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:25:53.372100 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:25:53.372107 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:25:53.372115 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:25:53.372122 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:25:53.372129 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:25:53.372136 kernel: ACPI: Interpreter enabled Dec 13 01:25:53.372154 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:25:53.372163 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:25:53.372172 kernel: printk: console [ttyAMA0] enabled Dec 13 01:25:53.372179 kernel: printk: bootconsole [pl11] disabled Dec 13 01:25:53.372187 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 01:25:53.372194 kernel: iommu: Default domain type: Translated Dec 13 01:25:53.372201 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:25:53.372209 kernel: efivars: Registered efivars operations Dec 13 01:25:53.372216 kernel: vgaarb: loaded Dec 13 01:25:53.372223 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:25:53.372230 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:25:53.372240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:25:53.372247 kernel: pnp: PnP ACPI init Dec 13 01:25:53.372254 kernel: pnp: PnP ACPI: found 0 devices Dec 13 01:25:53.372262 kernel: NET: Registered PF_INET protocol family Dec 13 01:25:53.372269 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:25:53.372277 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:25:53.372284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:25:53.372292 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:25:53.372301 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:25:53.372308 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:25:53.372316 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:25:53.372323 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:25:53.372330 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:25:53.372338 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:25:53.372345 kernel: kvm [1]: HYP mode not available Dec 13 01:25:53.372352 kernel: Initialise system trusted keyrings Dec 13 01:25:53.372359 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:25:53.372368 kernel: Key type asymmetric registered Dec 13 01:25:53.372375 kernel: Asymmetric key parser 'x509' registered Dec 13 01:25:53.372382 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:25:53.372390 kernel: io scheduler mq-deadline registered Dec 13 01:25:53.372397 kernel: io scheduler kyber registered Dec 13 01:25:53.372405 kernel: io scheduler bfq registered Dec 13 01:25:53.372412 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:25:53.372419 kernel: thunder_xcv, ver 1.0 Dec 13 01:25:53.372427 kernel: thunder_bgx, ver 1.0 Dec 13 01:25:53.372434 kernel: nicpf, ver 1.0 Dec 13 01:25:53.372443 kernel: nicvf, ver 1.0 Dec 13 01:25:53.372594 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:25:53.372668 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:25:52 UTC (1734053152) Dec 13 01:25:53.372679 kernel: efifb: probing for efifb Dec 13 01:25:53.372686 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:25:53.372694 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:25:53.372701 kernel: efifb: scrolling: redraw Dec 13 01:25:53.372711 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:25:53.372718 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:25:53.372725 kernel: fb0: EFI VGA frame buffer device Dec 13 01:25:53.372733 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 01:25:53.372740 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:25:53.372747 kernel: No ACPI PMU IRQ for CPU0 Dec 13 01:25:53.372755 kernel: No ACPI PMU IRQ for CPU1 Dec 13 01:25:53.372762 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 01:25:53.372769 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:25:53.372778 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:25:53.372785 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:25:53.372793 kernel: Segment Routing with IPv6 Dec 13 01:25:53.372800 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:25:53.372807 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:25:53.372815 kernel: Key type dns_resolver registered Dec 13 01:25:53.372822 kernel: registered taskstats version 1 Dec 13 01:25:53.372829 kernel: Loading compiled-in X.509 certificates Dec 13 01:25:53.372837 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:25:53.372844 kernel: Key type .fscrypt registered Dec 13 01:25:53.372853 kernel: Key type fscrypt-provisioning registered Dec 13 01:25:53.372861 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:25:53.372868 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:25:53.372875 kernel: ima: No architecture policies found Dec 13 01:25:53.372883 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:25:53.372891 kernel: clk: Disabling unused clocks Dec 13 01:25:53.372898 kernel: Freeing unused kernel memory: 39360K Dec 13 01:25:53.372905 kernel: Run /init as init process Dec 13 01:25:53.372914 kernel: with arguments: Dec 13 01:25:53.372921 kernel: /init Dec 13 01:25:53.372928 kernel: with environment: Dec 13 01:25:53.372935 kernel: HOME=/ Dec 13 01:25:53.372943 kernel: TERM=linux Dec 13 01:25:53.372950 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:25:53.372959 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:53.372969 systemd[1]: Detected virtualization microsoft. Dec 13 01:25:53.372979 systemd[1]: Detected architecture arm64. Dec 13 01:25:53.372987 systemd[1]: Running in initrd. Dec 13 01:25:53.372995 systemd[1]: No hostname configured, using default hostname. Dec 13 01:25:53.373003 systemd[1]: Hostname set to . Dec 13 01:25:53.373011 systemd[1]: Initializing machine ID from random generator. Dec 13 01:25:53.373019 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:25:53.373027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:53.373035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:53.373044 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:25:53.373053 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:53.373061 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:25:53.373069 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:25:53.373078 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:25:53.373087 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:25:53.373095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:53.373105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:53.373113 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:53.373121 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:53.373128 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:53.373136 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:53.375184 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:53.375206 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:53.375215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:25:53.375229 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:25:53.375237 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:53.375245 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:53.375253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:53.375261 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:25:53.375269 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:25:53.375277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:53.375285 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:25:53.375293 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:25:53.375303 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:53.375311 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:53.375346 systemd-journald[217]: Collecting audit messages is disabled. Dec 13 01:25:53.375367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:53.375379 systemd-journald[217]: Journal started Dec 13 01:25:53.375398 systemd-journald[217]: Runtime Journal (/run/log/journal/baaeb9f79c7141ef96966ac0c3832c94) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:25:53.387051 systemd-modules-load[218]: Inserted module 'overlay' Dec 13 01:25:53.408635 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:53.409237 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:53.440453 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:25:53.440481 kernel: Bridge firewalling registered Dec 13 01:25:53.432578 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:53.437326 systemd-modules-load[218]: Inserted module 'br_netfilter' Dec 13 01:25:53.448347 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:25:53.461161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:53.471179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:53.495458 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:53.504341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:53.519308 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:53.545745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:53.561357 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:53.573170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:53.586438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:53.599098 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:53.626448 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:25:53.636354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:25:53.658472 dracut-cmdline[249]: dracut-dracut-053 Dec 13 01:25:53.658472 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:25:53.699229 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:25:53.716876 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:53.730542 systemd-resolved[252]: Positive Trust Anchors: Dec 13 01:25:53.730552 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:53.730584 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:53.732800 systemd-resolved[252]: Defaulting to hostname 'linux'. Dec 13 01:25:53.734540 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:53.742397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:53.849194 kernel: SCSI subsystem initialized Dec 13 01:25:53.859183 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:25:53.870174 kernel: iscsi: registered transport (tcp) Dec 13 01:25:53.888107 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:25:53.888133 kernel: QLogic iSCSI HBA Driver Dec 13 01:25:53.921584 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:53.937479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:25:53.970098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:25:53.970150 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:25:53.976826 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:25:54.024503 kernel: raid6: neonx8 gen() 15747 MB/s Dec 13 01:25:54.043156 kernel: raid6: neonx4 gen() 15423 MB/s Dec 13 01:25:54.063158 kernel: raid6: neonx2 gen() 13267 MB/s Dec 13 01:25:54.084154 kernel: raid6: neonx1 gen() 10482 MB/s Dec 13 01:25:54.104153 kernel: raid6: int64x8 gen() 6960 MB/s Dec 13 01:25:54.124152 kernel: raid6: int64x4 gen() 7354 MB/s Dec 13 01:25:54.145154 kernel: raid6: int64x2 gen() 6134 MB/s Dec 13 01:25:54.168513 kernel: raid6: int64x1 gen() 5062 MB/s Dec 13 01:25:54.168533 kernel: raid6: using algorithm neonx8 gen() 15747 MB/s Dec 13 01:25:54.192575 kernel: raid6: .... 
xor() 11931 MB/s, rmw enabled Dec 13 01:25:54.192592 kernel: raid6: using neon recovery algorithm Dec 13 01:25:54.205445 kernel: xor: measuring software checksum speed Dec 13 01:25:54.205465 kernel: 8regs : 19793 MB/sec Dec 13 01:25:54.209183 kernel: 32regs : 19580 MB/sec Dec 13 01:25:54.212826 kernel: arm64_neon : 26892 MB/sec Dec 13 01:25:54.217299 kernel: xor: using function: arm64_neon (26892 MB/sec) Dec 13 01:25:54.268165 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:25:54.278188 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:54.299348 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:54.323604 systemd-udevd[437]: Using default interface naming scheme 'v255'. Dec 13 01:25:54.330043 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:54.354301 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:25:54.367938 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Dec 13 01:25:54.394203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:54.409383 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:54.452698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:54.478392 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:25:54.508774 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:54.520323 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:54.534850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:54.550407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:54.571391 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:25:54.588181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:54.618359 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 01:25:54.618384 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:25:54.618394 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:25:54.618404 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 01:25:54.609778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:54.662992 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:25:54.663181 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:25:54.663193 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:25:54.663203 kernel: scsi host0: storvsc_host_t Dec 13 01:25:54.609932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:54.715821 kernel: scsi host1: storvsc_host_t Dec 13 01:25:54.716014 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 01:25:54.716028 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:25:54.716127 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:25:54.716137 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:25:54.716247 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:25:54.640228 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:54.738400 kernel: PTP clock support registered Dec 13 01:25:54.701991 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:54.702262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:54.768727 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 01:25:54.724298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:55.058541 kernel: hv_vmbus: registering driver hv_utils Dec 13 01:25:55.058565 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:25:55.058740 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:25:55.058755 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 01:25:55.058765 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 01:25:55.058774 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 01:25:55.058783 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:25:54.779204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:55.101686 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:25:55.142410 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:25:55.142565 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:25:55.142656 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:25:55.142742 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:25:55.142826 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: VF slot 1 added Dec 13 01:25:55.142928 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:55.142939 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:25:55.050168 systemd-resolved[252]: Clock change detected. Flushing caches. Dec 13 01:25:55.105003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:55.198127 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:25:55.198154 kernel: hv_pci 23cb5dac-b61e-4d4f-9cf3-6e8f30e46176: PCI VMBus probing: Using version 0x10004 Dec 13 01:25:55.311110 kernel: hv_pci 23cb5dac-b61e-4d4f-9cf3-6e8f30e46176: PCI host bridge to bus b61e:00 Dec 13 01:25:55.311251 kernel: pci_bus b61e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 01:25:55.311358 kernel: pci_bus b61e:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:25:55.311453 kernel: pci b61e:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 01:25:55.311569 kernel: pci b61e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:55.311671 kernel: pci b61e:00:02.0: enabling Extended Tags Dec 13 01:25:55.311766 kernel: pci b61e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b61e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 01:25:55.311866 kernel: pci_bus b61e:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:25:55.311953 kernel: pci b61e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:55.105094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:55.141580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:55.166289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 01:25:55.202637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:55.248098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:55.367181 kernel: mlx5_core b61e:00:02.0: enabling device (0000 -> 0002) Dec 13 01:25:55.599843 kernel: mlx5_core b61e:00:02.0: firmware version: 16.30.1284 Dec 13 01:25:55.599991 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: VF registering: eth1 Dec 13 01:25:55.600110 kernel: mlx5_core b61e:00:02.0 eth1: joined to eth0 Dec 13 01:25:55.600202 kernel: mlx5_core b61e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:25:55.605008 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:25:55.626583 kernel: mlx5_core b61e:00:02.0 enP46622s1: renamed from eth1 Dec 13 01:25:55.710668 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (491) Dec 13 01:25:55.715742 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:25:55.748212 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (483) Dec 13 01:25:55.737522 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:25:55.751516 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:25:55.775433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:25:55.795666 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:25:55.822467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:55.831444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:56.833087 disk-uuid[605]: The operation has completed successfully. Dec 13 01:25:56.839772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:56.901906 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:25:56.902003 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:25:56.933045 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:25:56.949708 sh[691]: Success Dec 13 01:25:56.978482 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:25:57.361896 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:25:57.383555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:25:57.394060 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:25:57.426638 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:25:57.426699 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:57.437760 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:25:57.445195 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:25:57.452543 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:25:57.754249 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:25:57.760763 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Dec 13 01:25:57.780695 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:25:57.811778 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:57.811837 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:57.806615 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:25:57.834286 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:57.856713 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:57.875405 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:25:57.881278 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:57.914213 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:25:57.930704 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:25:57.960531 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:57.982573 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:58.018675 systemd-networkd[875]: lo: Link UP Dec 13 01:25:58.018686 systemd-networkd[875]: lo: Gained carrier Dec 13 01:25:58.020896 systemd-networkd[875]: Enumeration completed Dec 13 01:25:58.021018 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:58.034575 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:58.034579 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:58.035018 systemd[1]: Reached target network.target - Network. Dec 13 01:25:58.135458 kernel: mlx5_core b61e:00:02.0 enP46622s1: Link up Dec 13 01:25:58.177449 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: Data path switched to VF: enP46622s1 Dec 13 01:25:58.178595 systemd-networkd[875]: enP46622s1: Link UP Dec 13 01:25:58.178688 systemd-networkd[875]: eth0: Link UP Dec 13 01:25:58.178805 systemd-networkd[875]: eth0: Gained carrier Dec 13 01:25:58.178814 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:58.207743 systemd-networkd[875]: enP46622s1: Gained carrier Dec 13 01:25:58.222478 systemd-networkd[875]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:25:58.829932 ignition[851]: Ignition 2.19.0 Dec 13 01:25:58.829942 ignition[851]: Stage: fetch-offline Dec 13 01:25:58.832995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:58.829980 ignition[851]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:58.829989 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:58.830085 ignition[851]: parsed url from cmdline: "" Dec 13 01:25:58.861604 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:25:58.830088 ignition[851]: no config URL provided Dec 13 01:25:58.830098 ignition[851]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:58.830106 ignition[851]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:58.830111 ignition[851]: failed to fetch config: resource requires networking Dec 13 01:25:58.830280 ignition[851]: Ignition finished successfully Dec 13 01:25:58.887648 ignition[886]: Ignition 2.19.0 Dec 13 01:25:58.887655 ignition[886]: Stage: fetch Dec 13 01:25:58.887864 ignition[886]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:58.887874 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:58.887981 ignition[886]: parsed url from cmdline: "" Dec 13 01:25:58.887987 ignition[886]: no config URL provided Dec 13 01:25:58.887992 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:58.887998 ignition[886]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:58.888022 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:25:59.001809 ignition[886]: GET result: OK Dec 13 01:25:59.001872 ignition[886]: config has been read from IMDS userdata Dec 13 01:25:59.001912 ignition[886]: parsing config with SHA512: ed2d665e7bf631ee08b6fb552a53e1ac29d9ffae445702004a817e3ad2c98f072976afd7bc066dfb57f0c034d1799ad46b3ae31e7b4ea52a6d4fc1e5370181a6 Dec 13 01:25:59.006566 unknown[886]: fetched base config from "system" Dec 13 01:25:59.007122 ignition[886]: fetch: fetch complete Dec 13 01:25:59.006573 unknown[886]: fetched base config from "system" Dec 13 01:25:59.007128 ignition[886]: fetch: fetch passed Dec 13 01:25:59.006651 unknown[886]: fetched user config from "azure" Dec 13 01:25:59.007198 ignition[886]: Ignition finished successfully Dec 13 01:25:59.013920 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:25:59.067213 ignition[892]: Ignition 2.19.0 Dec 13 01:25:59.038656 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:25:59.067219 ignition[892]: Stage: kargs Dec 13 01:25:59.078482 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:25:59.067443 ignition[892]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:59.067453 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:59.101755 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:25:59.069014 ignition[892]: kargs: kargs passed Dec 13 01:25:59.138612 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:25:59.069074 ignition[892]: Ignition finished successfully Dec 13 01:25:59.149608 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:59.135846 ignition[898]: Ignition 2.19.0 Dec 13 01:25:59.161009 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:25:59.135854 ignition[898]: Stage: disks Dec 13 01:25:59.175923 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:59.136115 ignition[898]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:59.188162 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:25:59.136126 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:59.202231 systemd[1]: Reached target basic.target - Basic System. 
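The GET logged by the fetch stage above is the Azure Instance Metadata Service (IMDS) userData endpoint. The same request can be replayed by hand for debugging; IMDS requires the Metadata: true header, and the returned userdata is base64-encoded (a sketch, not part of the log):

    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" \
      | base64 -d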
Dec 13 01:25:59.137336 ignition[898]: disks: disks passed
Dec 13 01:25:59.238714 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:25:59.137418 ignition[898]: Ignition finished successfully
Dec 13 01:25:59.352509 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Dec 13 01:25:59.365564 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:25:59.386675 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:25:59.449632 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:25:59.450171 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:25:59.456326 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:25:59.500507 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:25:59.516577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:25:59.531548 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:25:59.565506 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (918)
Dec 13 01:25:59.565531 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:25:59.549275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:25:59.599048 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:25:59.549312 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:25:59.615149 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:25:59.605768 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:25:59.633820 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:25:59.654239 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:25:59.659630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:25:59.764588 systemd-networkd[875]: enP46622s1: Gained IPv6LL
Dec 13 01:25:59.828614 systemd-networkd[875]: eth0: Gained IPv6LL
Dec 13 01:26:00.130085 coreos-metadata[920]: Dec 13 01:26:00.130 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:26:00.147247 coreos-metadata[920]: Dec 13 01:26:00.147 INFO Fetch successful
Dec 13 01:26:00.147247 coreos-metadata[920]: Dec 13 01:26:00.147 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:26:00.168606 coreos-metadata[920]: Dec 13 01:26:00.161 INFO Fetch successful
Dec 13 01:26:00.176017 coreos-metadata[920]: Dec 13 01:26:00.175 INFO wrote hostname ci-4081.2.1-a-d903163327 to /sysroot/etc/hostname
Dec 13 01:26:00.193535 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:26:00.332325 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:26:00.343394 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:26:00.355233 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:26:00.378995 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:26:01.146213 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
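At this point the ext4 ROOT filesystem (sda9) has been checked and mounted at /sysroot, the btrfs OEM partition (sda6) is mounted beneath it, and the hostname has been written from instance metadata. The partition and label layout these messages refer to can be listed with standard util-linux tooling (illustrative, not from the log):

    lsblk -f /dev/sda   # partitions with filesystem types and labels (EFI-SYSTEM, ROOT, OEM, USR-A)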
Dec 13 01:26:01.164655 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:26:01.173716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:26:01.205872 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:01.199994 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:26:01.223646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:26:01.238789 ignition[1036]: INFO : Ignition 2.19.0
Dec 13 01:26:01.238789 ignition[1036]: INFO : Stage: mount
Dec 13 01:26:01.247541 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:01.247541 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:01.247541 ignition[1036]: INFO : mount: mount passed
Dec 13 01:26:01.247541 ignition[1036]: INFO : Ignition finished successfully
Dec 13 01:26:01.248489 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:26:01.276617 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:26:01.299303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:26:01.331624 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047)
Dec 13 01:26:01.348281 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:26:01.348310 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:26:01.353219 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:26:01.361452 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:26:01.362661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:26:01.391661 ignition[1064]: INFO : Ignition 2.19.0
Dec 13 01:26:01.391661 ignition[1064]: INFO : Stage: files
Dec 13 01:26:01.401709 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:01.401709 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:01.401709 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:26:01.424755 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:26:01.424755 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:26:01.473060 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:26:01.482525 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:26:01.482525 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:26:01.473487 unknown[1064]: wrote ssh authorized keys file for user: core
Dec 13 01:26:01.511987 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:26:01.511987 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:26:01.813157 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:26:01.900494 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:01.913706 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:26:02.363304 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:26:02.715766 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:26:02.715766 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:26:02.740684 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:26:02.753882 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:26:02.753882 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:26:02.753882 ignition[1064]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:26:02.753882 ignition[1064]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:26:02.802188 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:26:02.802188 ignition[1064]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:26:02.802188 ignition[1064]: INFO : files: files passed
Dec 13 01:26:02.802188 ignition[1064]: INFO : Ignition finished successfully
Dec 13 01:26:02.767750 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:26:02.804223 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:26:02.823628 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:26:02.855849 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:26:02.855958 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:26:02.910214 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:02.910214 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:02.934178 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:26:02.921990 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:26:02.947354 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:26:02.991706 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:26:03.030828 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:26:03.030969 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:26:03.043863 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:26:03.058631 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:26:03.072886 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:26:03.092960 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:26:03.118053 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:26:03.136696 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:26:03.157365 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:26:03.157483 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:26:03.173112 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:26:03.190445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:26:03.212524 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:26:03.229039 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:26:03.229126 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:26:03.252052 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:26:03.260355 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:26:03.275067 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:26:03.290483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:26:03.304859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:26:03.320707 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
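The files stage above materialized the user config: core's ssh keys, the helm tarball under /sysroot/opt, the kubernetes sysext image plus its /etc/extensions symlink, and prepare-helm.service written and preset-enabled. Once the real root is running, that outcome can be spot-checked with (commands illustrative, not from the log):

    systemctl cat prepare-helm.service   # the unit Ignition wrote and enabled
    cat /etc/.ignition-result.json       # the per-run result record written by op(e)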
Dec 13 01:26:03.334373 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:26:03.349691 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:26:03.363953 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:26:03.379271 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:26:03.391923 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:26:03.391998 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:26:03.410275 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:26:03.424747 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:26:03.439838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:26:03.447422 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:26:03.456535 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:26:03.456609 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:26:03.478317 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:26:03.478386 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:26:03.489384 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:26:03.489445 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:26:03.508801 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:26:03.586264 ignition[1118]: INFO : Ignition 2.19.0
Dec 13 01:26:03.586264 ignition[1118]: INFO : Stage: umount
Dec 13 01:26:03.586264 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:26:03.586264 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:26:03.586264 ignition[1118]: INFO : umount: umount passed
Dec 13 01:26:03.586264 ignition[1118]: INFO : Ignition finished successfully
Dec 13 01:26:03.508867 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:26:03.550572 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:26:03.579502 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:26:03.592403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:26:03.592503 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:26:03.612729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:26:03.612792 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:26:03.628014 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:26:03.628103 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:26:03.637684 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:26:03.638052 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:26:03.638096 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:26:03.646903 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:26:03.646964 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:26:03.659115 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:26:03.659168 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:26:03.674589 systemd[1]: Stopped target network.target - Network.
Dec 13 01:26:03.688721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:26:03.688797 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:26:03.703479 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:26:03.717394 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:26:03.725721 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:26:03.735759 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:26:03.750572 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:26:03.763412 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:26:03.763479 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:26:03.776128 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:26:03.776172 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:26:03.791039 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:26:03.791108 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:26:03.805492 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:26:03.805543 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:26:03.819832 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:26:03.834723 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:26:03.848478 systemd-networkd[875]: eth0: DHCPv6 lease lost
Dec 13 01:26:03.857673 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:26:03.857897 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:26:04.161532 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: Data path switched from VF: enP46622s1
Dec 13 01:26:03.871892 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:26:03.872000 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:26:03.889021 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:26:03.889080 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:26:03.942591 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:26:03.956808 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:26:03.956896 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:26:03.973046 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:26:03.973100 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:26:03.989407 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:26:03.989490 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:26:03.997939 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:26:03.997991 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:26:04.013001 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:26:04.032958 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:26:04.033455 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:26:04.078902 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:26:04.079053 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:26:04.097710 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:26:04.097793 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:26:04.113739 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:26:04.113788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:26:04.127635 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:26:04.127696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:26:04.161446 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:26:04.161519 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:26:04.175924 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:26:04.175989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:26:04.198953 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:26:04.199042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:26:04.235797 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:26:04.458471 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:26:04.252840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:26:04.252920 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:26:04.267082 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:26:04.267127 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:26:04.281057 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:26:04.281111 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:26:04.296604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:26:04.296655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:04.311310 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:26:04.311443 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:26:04.324561 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:26:04.324661 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:26:04.339512 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:26:04.375725 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:26:04.391670 systemd[1]: Switching root.
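The teardown above runs in reverse dependency order: initrd-only services and sockets are stopped, the udev database is cleaned, and only then does PID 1 pivot into the real root. The ordering constraints that produce this sequence can be examined on a live system with standard systemd introspection (illustrative):

    systemctl list-dependencies --after initrd-switch-root.target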
Dec 13 01:26:04.551888 systemd-journald[217]: Journal stopped
Dec 13 01:26:09.686337 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:26:09.686362 kernel: SELinux: policy capability open_perms=1
Dec 13 01:26:09.686372 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:26:09.686380 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:26:09.686390 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:26:09.686397 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:26:09.686406 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:26:09.686414 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:26:09.686423 kernel: audit: type=1403 audit(1734053165.666:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:26:09.689496 systemd[1]: Successfully loaded SELinux policy in 174.992ms.
Dec 13 01:26:09.689524 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.314ms.
Dec 13 01:26:09.689536 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:26:09.689546 systemd[1]: Detected virtualization microsoft.
Dec 13 01:26:09.689555 systemd[1]: Detected architecture arm64.
Dec 13 01:26:09.689564 systemd[1]: Detected first boot.
Dec 13 01:26:09.689576 systemd[1]: Hostname set to <ci-4081.2.1-a-d903163327>.
Dec 13 01:26:09.689586 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:26:09.689595 zram_generator::config[1159]: No configuration found.
Dec 13 01:26:09.689605 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:26:09.689614 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:26:09.689623 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:26:09.689633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:26:09.689644 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:26:09.689654 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:26:09.689664 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:26:09.689673 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:26:09.689683 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:26:09.689692 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:26:09.689702 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:26:09.689713 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:26:09.689724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:26:09.689734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:26:09.689743 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:26:09.689753 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
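After the pivot, PID 1 loads the SELinux policy and relabels the API filesystems before continuing startup, which is what the capability lines and the audit type=1403 record above document. The active enforcement mode is exposed through the selinuxfs kernel interface (sketch, assuming selinuxfs is mounted at the default location):

    cat /sys/fs/selinux/enforce   # 0 = permissive, 1 = enforcing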
Dec 13 01:26:09.689762 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:26:09.689771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:26:09.689781 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 01:26:09.689792 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:26:09.689802 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:26:09.689811 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:26:09.689823 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:26:09.689833 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:26:09.689842 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:26:09.689852 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:26:09.689861 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:26:09.689872 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:26:09.689881 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:26:09.689891 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:26:09.689901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:26:09.689910 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:26:09.689920 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:26:09.689932 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:26:09.689942 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:26:09.689952 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:26:09.689961 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:26:09.689971 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:26:09.689980 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:26:09.689990 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:26:09.690002 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:26:09.690012 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:26:09.690021 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:26:09.690031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:26:09.690041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:26:09.690050 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:26:09.690060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:26:09.690070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:26:09.690081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:26:09.690091 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:26:09.690100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:26:09.690110 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:26:09.690120 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:26:09.690129 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:26:09.690140 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:26:09.690149 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:26:09.690161 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:26:09.690170 kernel: loop: module loaded
Dec 13 01:26:09.690179 kernel: fuse: init (API version 7.39)
Dec 13 01:26:09.690187 kernel: ACPI: bus type drm_connector registered
Dec 13 01:26:09.690197 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:26:09.690206 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:26:09.690250 systemd-journald[1262]: Collecting audit messages is disabled.
Dec 13 01:26:09.690277 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:26:09.690288 systemd-journald[1262]: Journal started
Dec 13 01:26:09.690309 systemd-journald[1262]: Runtime Journal (/run/log/journal/e4b682f25238413aae5bf0fb7697b592) is 8.0M, max 78.5M, 70.5M free.
Dec 13 01:26:08.563384 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:26:08.716366 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:26:08.716777 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:26:08.717099 systemd[1]: systemd-journald.service: Consumed 3.937s CPU time.
Dec 13 01:26:09.728192 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:26:09.738649 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:26:09.738729 systemd[1]: Stopped verity-setup.service.
Dec 13 01:26:09.758017 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:26:09.758921 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:26:09.765447 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:26:09.772059 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:26:09.778537 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:26:09.785270 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:26:09.792047 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:26:09.798261 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:26:09.807134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:26:09.814995 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:26:09.815147 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:26:09.822857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:26:09.823012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:26:09.830958 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:26:09.831109 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:26:09.837757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:26:09.837903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:26:09.845236 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:26:09.845376 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:26:09.852270 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:26:09.853538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:26:09.860129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:26:09.868394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:26:09.876465 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:26:09.884401 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:26:09.901471 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:26:09.911532 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:26:09.919575 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:26:09.928642 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:26:09.928687 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:26:09.936849 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:26:09.949593 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:26:09.958011 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:26:09.964647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:26:09.966358 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:26:09.975164 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:26:09.982868 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:26:09.984980 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:26:09.992229 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:26:09.993411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:26:10.002645 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:26:10.018830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:26:10.041688 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:26:10.052385 systemd-journald[1262]: Time spent on flushing to /var/log/journal/e4b682f25238413aae5bf0fb7697b592 is 14.712ms for 899 entries.
Dec 13 01:26:10.052385 systemd-journald[1262]: System Journal (/var/log/journal/e4b682f25238413aae5bf0fb7697b592) is 8.0M, max 2.6G, 2.6G free.
Dec 13 01:26:10.123647 systemd-journald[1262]: Received client request to flush runtime journal.
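systemd-journal-flush.service migrates the volatile /run journal into /var/log/journal once the root filesystem is writable, which is what the "Received client request to flush runtime journal" entry records. The equivalent manual operations (illustrative):

    sudo journalctl --flush   # move the runtime journal to /var/log/journal
    journalctl --disk-usage   # persistent journal space currently in use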
Dec 13 01:26:10.123713 kernel: loop0: detected capacity change from 0 to 114328
Dec 13 01:26:10.061952 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:26:10.076554 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:26:10.085468 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:26:10.094612 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:26:10.109278 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:26:10.125842 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:26:10.134948 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:26:10.148757 udevadm[1297]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:26:10.188305 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:26:10.189112 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Dec 13 01:26:10.189126 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Dec 13 01:26:10.189223 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:26:10.202269 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:26:10.210122 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:26:10.230662 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:26:10.325569 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:26:10.340874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:26:10.358745 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Dec 13 01:26:10.358764 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Dec 13 01:26:10.364543 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:26:10.445464 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:26:10.494481 kernel: loop1: detected capacity change from 0 to 31320
Dec 13 01:26:10.986461 kernel: loop2: detected capacity change from 0 to 114432
Dec 13 01:26:11.251546 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:26:11.263652 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:26:11.294396 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
Dec 13 01:26:11.333457 kernel: loop3: detected capacity change from 0 to 194512
Dec 13 01:26:11.370449 kernel: loop4: detected capacity change from 0 to 114328
Dec 13 01:26:11.382543 kernel: loop5: detected capacity change from 0 to 31320
Dec 13 01:26:11.385263 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:26:11.405506 kernel: loop6: detected capacity change from 0 to 114432
Dec 13 01:26:11.405749 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:26:11.432492 kernel: loop7: detected capacity change from 0 to 194512
Dec 13 01:26:11.439923 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 13 01:26:11.440710 (sd-merge)[1323]: Merged extensions into '/usr'.
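The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images onto /usr; the loop0 through loop7 capacity-change messages are those squashfs images being attached. The merge state can be examined afterwards with (sketch, standard systemd-sysext tooling):

    systemd-sysext status   # which hierarchies are merged and from which extensions
    systemd-sysext list     # extension images discovered in the search path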
Dec 13 01:26:11.474623 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:26:11.474642 systemd[1]: Reloading...
Dec 13 01:26:11.523539 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1332)
Dec 13 01:26:11.540465 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1332)
Dec 13 01:26:11.613982 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:26:11.614078 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 01:26:11.614098 zram_generator::config[1383]: No configuration found.
Dec 13 01:26:11.614125 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 01:26:11.625458 kernel: hv_balloon: Memory hot add disabled on ARM64
Dec 13 01:26:11.666513 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 01:26:11.684804 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 01:26:11.684908 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 01:26:11.698486 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:26:11.707051 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:26:11.735524 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1331)
Dec 13 01:26:11.839298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:26:11.916409 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 01:26:11.916499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 13 01:26:11.925273 systemd[1]: Reloading finished in 450 ms.
Dec 13 01:26:11.963067 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:26:12.001653 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:26:12.007919 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:26:12.018673 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:26:12.035676 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:26:12.045815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:26:12.057163 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:26:12.067169 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:26:12.091884 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:26:12.099919 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:26:12.110554 systemd[1]: Reloading requested from client PID 1479 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:26:12.110572 systemd[1]: Reloading...
Dec 13 01:26:12.115555 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:26:12.115847 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:26:12.116522 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:26:12.116750 systemd-tmpfiles[1481]: ACLs are not supported, ignoring.
Dec 13 01:26:12.116796 systemd-tmpfiles[1481]: ACLs are not supported, ignoring.
Dec 13 01:26:12.169019 systemd-tmpfiles[1481]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:26:12.169630 systemd-tmpfiles[1481]: Skipping /boot
Dec 13 01:26:12.185249 systemd-tmpfiles[1481]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:26:12.185412 systemd-tmpfiles[1481]: Skipping /boot
Dec 13 01:26:12.194103 zram_generator::config[1523]: No configuration found.
Dec 13 01:26:12.202655 lvm[1489]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:26:12.235760 systemd-networkd[1333]: lo: Link UP
Dec 13 01:26:12.235773 systemd-networkd[1333]: lo: Gained carrier
Dec 13 01:26:12.240674 systemd-networkd[1333]: Enumeration completed
Dec 13 01:26:12.241637 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:26:12.241649 systemd-networkd[1333]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:26:12.297459 kernel: mlx5_core b61e:00:02.0 enP46622s1: Link up
Dec 13 01:26:12.338333 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:26:12.341147 kernel: hv_netvsc 0022487a-7883-0022-487a-78830022487a eth0: Data path switched to VF: enP46622s1
Dec 13 01:26:12.340746 systemd-networkd[1333]: enP46622s1: Link UP
Dec 13 01:26:12.340831 systemd-networkd[1333]: eth0: Link UP
Dec 13 01:26:12.340834 systemd-networkd[1333]: eth0: Gained carrier
Dec 13 01:26:12.340871 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:26:12.345829 systemd-networkd[1333]: enP46622s1: Gained carrier
Dec 13 01:26:12.352532 systemd-networkd[1333]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 13 01:26:12.418237 systemd[1]: Reloading finished in 307 ms.
Dec 13 01:26:12.433138 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:26:12.449936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:26:12.458943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:26:12.468467 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:26:12.482398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:26:12.494729 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:26:12.524705 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:26:12.533346 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:26:12.543761 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:26:12.550747 lvm[1594]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:26:12.560869 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:26:12.577790 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:26:12.586930 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:26:12.601586 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:26:12.617459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:26:12.627809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:26:12.639778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:26:12.649746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:26:12.656644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:26:12.659544 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:26:12.672524 augenrules[1611]: No rules
Dec 13 01:26:12.679265 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:26:12.688268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:26:12.688748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:26:12.696573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:26:12.696831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:26:12.705934 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:26:12.706165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:26:12.716366 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:26:12.730331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:26:12.735708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:26:12.747464 systemd-resolved[1602]: Positive Trust Anchors:
Dec 13 01:26:12.747485 systemd-resolved[1602]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:26:12.747518 systemd-resolved[1602]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:26:12.755518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:26:12.766048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:26:12.772929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:26:12.773974 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:26:12.775474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:26:12.775665 systemd-resolved[1602]: Using system hostname 'ci-4081.2.1-a-d903163327'.
Dec 13 01:26:12.785068 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:26:12.792120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:26:12.792391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:26:12.801642 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:26:12.801911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:26:12.814992 systemd[1]: Reached target network.target - Network.
Dec 13 01:26:12.821475 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:26:12.830384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:26:12.837856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:26:12.847397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:26:12.856132 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:26:12.870072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:26:12.877334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:26:12.877665 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:26:12.888374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:26:12.888643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:26:12.899477 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:26:12.901528 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:26:12.911702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:26:12.911880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:26:12.921802 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:26:12.921941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:26:12.932137 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:26:12.941543 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:26:12.941625 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:26:13.505749 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:26:13.515774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:26:13.588669 systemd-networkd[1333]: eth0: Gained IPv6LL
Dec 13 01:26:13.591274 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:26:13.600240 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:26:14.036575 systemd-networkd[1333]: enP46622s1: Gained IPv6LL
Dec 13 01:26:14.901187 ldconfig[1288]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
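systemd-resolved comes up with the built-in DNSSEC root trust anchor (the ". IN DS 20326 ..." entry) and the standard negative anchors for private and special-use zones, then adopts the hostname written earlier by the metadata agent. Its runtime view can be queried with (illustrative):

    resolvectl status   # per-link DNS servers, search domains, DNSSEC setting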
Dec 13 01:26:14.914691 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:26:14.938639 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:26:14.953137 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:26:14.960653 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:26:14.967695 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:26:14.975846 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:26:14.984094 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:26:14.991000 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:26:14.999672 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:26:15.007423 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:26:15.007505 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:26:15.013477 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:26:15.032585 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:26:15.042119 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:26:15.053126 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:26:15.062211 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:26:15.071712 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:26:15.080100 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:26:15.088914 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:15.088944 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:26:15.102578 systemd[1]: Starting chronyd.service - NTP client/server... Dec 13 01:26:15.114600 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:26:15.132699 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:26:15.140883 (chronyd)[1646]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Dec 13 01:26:15.144946 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:26:15.153115 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:26:15.164532 jq[1652]: false Dec 13 01:26:15.165523 chronyd[1654]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Dec 13 01:26:15.166688 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:26:15.173274 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:26:15.173323 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Dec 13 01:26:15.174609 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Dec 13 01:26:15.181145 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 13 01:26:15.185674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:15.188168 KVP[1656]: KVP starting; pid is:1656 Dec 13 01:26:15.196258 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:26:15.202678 chronyd[1654]: Timezone right/UTC failed leap second check, ignoring Dec 13 01:26:15.202914 chronyd[1654]: Loaded seccomp filter (level 2) Dec 13 01:26:15.207604 KVP[1656]: KVP LIC Version: 3.1 Dec 13 01:26:15.208466 kernel: hv_utils: KVP IC version 4.0 Dec 13 01:26:15.214807 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:26:15.226413 extend-filesystems[1655]: Found loop4 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found loop5 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found loop6 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found loop7 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found sda Dec 13 01:26:15.226413 extend-filesystems[1655]: Found sda1 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found sda2 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found sda3 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found usr Dec 13 01:26:15.226413 extend-filesystems[1655]: Found sda4 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found sda6 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found sda7 Dec 13 01:26:15.226413 extend-filesystems[1655]: Found sda9 Dec 13 01:26:15.226413 extend-filesystems[1655]: Checking size of /dev/sda9 Dec 13 01:26:15.225867 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:26:15.445715 coreos-metadata[1648]: Dec 13 01:26:15.357 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:26:15.445715 coreos-metadata[1648]: Dec 13 01:26:15.362 INFO Fetch successful Dec 13 01:26:15.445715 coreos-metadata[1648]: Dec 13 01:26:15.362 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 01:26:15.445715 coreos-metadata[1648]: Dec 13 01:26:15.371 INFO Fetch successful Dec 13 01:26:15.445715 coreos-metadata[1648]: Dec 13 01:26:15.371 INFO Fetching http://168.63.129.16/machine/7fa80240-f870-4451-ad1b-c393100a0832/e183b212%2Dfbe4%2D4916%2D8815%2De442849e545e.%5Fci%2D4081.2.1%2Da%2Dd903163327?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 01:26:15.445715 coreos-metadata[1648]: Dec 13 01:26:15.376 INFO Fetch successful Dec 13 01:26:15.445715 coreos-metadata[1648]: Dec 13 01:26:15.377 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:26:15.445715 coreos-metadata[1648]: Dec 13 01:26:15.402 INFO Fetch successful Dec 13 01:26:15.450001 extend-filesystems[1655]: Old size kept for /dev/sda9 Dec 13 01:26:15.450001 extend-filesystems[1655]: Found sr0 Dec 13 01:26:15.242833 dbus-daemon[1649]: [system] SELinux support is enabled Dec 13 01:26:15.246150 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:26:15.265659 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:26:15.299679 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:26:15.307962 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 13 01:26:15.308534 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:26:15.493621 update_engine[1683]: I20241213 01:26:15.389645 1683 main.cc:92] Flatcar Update Engine starting Dec 13 01:26:15.493621 update_engine[1683]: I20241213 01:26:15.392809 1683 update_check_scheduler.cc:74] Next update check in 2m27s Dec 13 01:26:15.325698 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:26:15.493919 jq[1686]: true Dec 13 01:26:15.343785 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:26:15.364187 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:26:15.394598 systemd[1]: Started chronyd.service - NTP client/server. Dec 13 01:26:15.427897 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:26:15.428063 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:26:15.428326 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:26:15.428488 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:26:15.430091 systemd-logind[1679]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 13 01:26:15.433649 systemd-logind[1679]: New seat seat0. Dec 13 01:26:15.449175 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:26:15.480058 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:26:15.480642 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:26:15.504227 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:26:15.526188 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:26:15.526401 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:26:15.553208 (ntainerd)[1717]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:26:15.568797 dbus-daemon[1649]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:26:15.571350 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:26:15.576397 jq[1716]: true Dec 13 01:26:15.591322 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:26:15.603461 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1687) Dec 13 01:26:15.607125 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:26:15.607326 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:26:15.607481 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:26:15.621356 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:26:15.621508 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:26:15.646536 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 13 01:26:15.662926 tar[1707]: linux-arm64/helm Dec 13 01:26:15.771648 bash[1763]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:26:15.774818 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:26:15.788679 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:26:15.892655 locksmithd[1745]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:26:16.145716 tar[1707]: linux-arm64/LICENSE Dec 13 01:26:16.145809 tar[1707]: linux-arm64/README.md Dec 13 01:26:16.166902 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:26:16.235465 containerd[1717]: time="2024-12-13T01:26:16.235065740Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:26:16.286721 containerd[1717]: time="2024-12-13T01:26:16.286662900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288383 containerd[1717]: time="2024-12-13T01:26:16.288050460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288383 containerd[1717]: time="2024-12-13T01:26:16.288095340Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:26:16.288383 containerd[1717]: time="2024-12-13T01:26:16.288113020Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:26:16.288383 containerd[1717]: time="2024-12-13T01:26:16.288271100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:26:16.288383 containerd[1717]: time="2024-12-13T01:26:16.288287820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288383 containerd[1717]: time="2024-12-13T01:26:16.288349940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288383 containerd[1717]: time="2024-12-13T01:26:16.288362980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288597 containerd[1717]: time="2024-12-13T01:26:16.288555780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288597 containerd[1717]: time="2024-12-13T01:26:16.288572140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288597 containerd[1717]: time="2024-12-13T01:26:16.288585620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288597 containerd[1717]: time="2024-12-13T01:26:16.288595820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288697 containerd[1717]: time="2024-12-13T01:26:16.288672540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:16.288897 containerd[1717]: time="2024-12-13T01:26:16.288874140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:26:16.289007 containerd[1717]: time="2024-12-13T01:26:16.288983260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:26:16.289007 containerd[1717]: time="2024-12-13T01:26:16.289003420Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:26:16.289103 containerd[1717]: time="2024-12-13T01:26:16.289082180Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:26:16.289148 containerd[1717]: time="2024-12-13T01:26:16.289129740Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:26:16.302750 containerd[1717]: time="2024-12-13T01:26:16.302323060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:26:16.302750 containerd[1717]: time="2024-12-13T01:26:16.302387020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:26:16.302750 containerd[1717]: time="2024-12-13T01:26:16.302406900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:26:16.302750 containerd[1717]: time="2024-12-13T01:26:16.302423140Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:26:16.302750 containerd[1717]: time="2024-12-13T01:26:16.302452380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:26:16.302750 containerd[1717]: time="2024-12-13T01:26:16.302608420Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:26:16.302987 containerd[1717]: time="2024-12-13T01:26:16.302827900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:26:16.302987 containerd[1717]: time="2024-12-13T01:26:16.302916460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:26:16.302987 containerd[1717]: time="2024-12-13T01:26:16.302931700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:26:16.302987 containerd[1717]: time="2024-12-13T01:26:16.302945620Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:26:16.302987 containerd[1717]: time="2024-12-13T01:26:16.302959300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:26:16.302987 containerd[1717]: time="2024-12-13T01:26:16.302973540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Dec 13 01:26:16.302987 containerd[1717]: time="2024-12-13T01:26:16.302986780Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:26:16.303107 containerd[1717]: time="2024-12-13T01:26:16.303000860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:26:16.303107 containerd[1717]: time="2024-12-13T01:26:16.303016220Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:26:16.303107 containerd[1717]: time="2024-12-13T01:26:16.303032820Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:26:16.303107 containerd[1717]: time="2024-12-13T01:26:16.303046180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:26:16.303107 containerd[1717]: time="2024-12-13T01:26:16.303058060Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:26:16.303107 containerd[1717]: time="2024-12-13T01:26:16.303075980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303107 containerd[1717]: time="2024-12-13T01:26:16.303091860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303107 containerd[1717]: time="2024-12-13T01:26:16.303105220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303123620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303136700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303149660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303161380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303173580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303187380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303208060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303223340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303235540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303247980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303264780Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:26:16.303282 containerd[1717]: time="2024-12-13T01:26:16.303284660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303296940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303307900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303352180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303368780Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303378940Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303390340Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303399980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303411500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303420940Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:26:16.304158 containerd[1717]: time="2024-12-13T01:26:16.303456820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:26:16.304342 containerd[1717]: time="2024-12-13T01:26:16.303742180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:26:16.304342 containerd[1717]: time="2024-12-13T01:26:16.303798420Z" level=info msg="Connect containerd service" Dec 13 01:26:16.304342 containerd[1717]: time="2024-12-13T01:26:16.303832860Z" level=info msg="using legacy CRI server" Dec 13 01:26:16.304342 containerd[1717]: time="2024-12-13T01:26:16.303839380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:26:16.304342 containerd[1717]: time="2024-12-13T01:26:16.303924460Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:26:16.307454 containerd[1717]: time="2024-12-13T01:26:16.304819220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:26:16.307454 
containerd[1717]: time="2024-12-13T01:26:16.304963380Z" level=info msg="Start subscribing containerd event" Dec 13 01:26:16.307454 containerd[1717]: time="2024-12-13T01:26:16.305021300Z" level=info msg="Start recovering state" Dec 13 01:26:16.307454 containerd[1717]: time="2024-12-13T01:26:16.305080620Z" level=info msg="Start event monitor" Dec 13 01:26:16.307454 containerd[1717]: time="2024-12-13T01:26:16.305101540Z" level=info msg="Start snapshots syncer" Dec 13 01:26:16.307454 containerd[1717]: time="2024-12-13T01:26:16.305111100Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:26:16.307454 containerd[1717]: time="2024-12-13T01:26:16.305117980Z" level=info msg="Start streaming server" Dec 13 01:26:16.307454 containerd[1717]: time="2024-12-13T01:26:16.305547500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:26:16.307454 containerd[1717]: time="2024-12-13T01:26:16.305623780Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:26:16.305771 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:26:16.316320 containerd[1717]: time="2024-12-13T01:26:16.316159900Z" level=info msg="containerd successfully booted in 0.081870s" Dec 13 01:26:16.402613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:16.418790 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:16.798649 kubelet[1786]: E1213 01:26:16.798560 1786 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:16.803274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:16.803444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:17.094641 sshd_keygen[1682]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:26:17.113197 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:26:17.125755 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:26:17.135870 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 13 01:26:17.144562 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:26:17.144736 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:26:17.162486 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:26:17.171593 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 13 01:26:17.192754 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:26:17.203720 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:26:17.216765 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:26:17.225042 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:26:17.231306 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:26:17.239335 systemd[1]: Startup finished in 706ms (kernel) + 12.475s (initrd) + 11.747s (userspace) = 24.929s. 
Dec 13 01:26:17.552917 login[1816]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 13 01:26:17.569510 login[1817]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:17.577607 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:26:17.584961 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:26:17.591037 systemd-logind[1679]: New session 1 of user core. Dec 13 01:26:17.596869 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:26:17.601769 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:26:17.609890 (systemd)[1824]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:26:17.738773 systemd[1824]: Queued start job for default target default.target. Dec 13 01:26:17.746382 systemd[1824]: Created slice app.slice - User Application Slice. Dec 13 01:26:17.746441 systemd[1824]: Reached target paths.target - Paths. Dec 13 01:26:17.746460 systemd[1824]: Reached target timers.target - Timers. Dec 13 01:26:17.748608 systemd[1824]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:26:17.759978 systemd[1824]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:26:17.760257 systemd[1824]: Reached target sockets.target - Sockets. Dec 13 01:26:17.760346 systemd[1824]: Reached target basic.target - Basic System. Dec 13 01:26:17.760511 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:26:17.760694 systemd[1824]: Reached target default.target - Main User Target. Dec 13 01:26:17.760859 systemd[1824]: Startup finished in 144ms. Dec 13 01:26:17.768635 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:26:18.553296 login[1816]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:18.557371 systemd-logind[1679]: New session 2 of user core. Dec 13 01:26:18.563606 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 13 01:26:18.754117 waagent[1813]: 2024-12-13T01:26:18.748398Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Dec 13 01:26:18.754424 waagent[1813]: 2024-12-13T01:26:18.754356Z INFO Daemon Daemon OS: flatcar 4081.2.1 Dec 13 01:26:18.759336 waagent[1813]: 2024-12-13T01:26:18.759269Z INFO Daemon Daemon Python: 3.11.9 Dec 13 01:26:18.763852 waagent[1813]: 2024-12-13T01:26:18.763788Z INFO Daemon Daemon Run daemon Dec 13 01:26:18.768311 waagent[1813]: 2024-12-13T01:26:18.768260Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.1' Dec 13 01:26:18.778465 waagent[1813]: 2024-12-13T01:26:18.778375Z INFO Daemon Daemon Using waagent for provisioning Dec 13 01:26:18.784956 waagent[1813]: 2024-12-13T01:26:18.784901Z INFO Daemon Daemon Activate resource disk Dec 13 01:26:18.790366 waagent[1813]: 2024-12-13T01:26:18.790309Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 01:26:18.801546 waagent[1813]: 2024-12-13T01:26:18.801476Z INFO Daemon Daemon Found device: None Dec 13 01:26:18.806667 waagent[1813]: 2024-12-13T01:26:18.806575Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 01:26:18.815473 waagent[1813]: 2024-12-13T01:26:18.815403Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 01:26:18.829115 waagent[1813]: 2024-12-13T01:26:18.829047Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:26:18.836160 waagent[1813]: 2024-12-13T01:26:18.836101Z INFO Daemon Daemon Running default provisioning handler Dec 13 01:26:18.849526 waagent[1813]: 2024-12-13T01:26:18.849442Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 13 01:26:18.865182 waagent[1813]: 2024-12-13T01:26:18.865109Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:26:18.875093 waagent[1813]: 2024-12-13T01:26:18.875026Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:26:18.880779 waagent[1813]: 2024-12-13T01:26:18.880720Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 01:26:18.972316 waagent[1813]: 2024-12-13T01:26:18.971266Z INFO Daemon Daemon Successfully mounted dvd Dec 13 01:26:18.989748 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 01:26:18.992135 waagent[1813]: 2024-12-13T01:26:18.992052Z INFO Daemon Daemon Detect protocol endpoint Dec 13 01:26:18.997755 waagent[1813]: 2024-12-13T01:26:18.997688Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:26:19.003551 waagent[1813]: 2024-12-13T01:26:19.003487Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 01:26:19.010555 waagent[1813]: 2024-12-13T01:26:19.010493Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 01:26:19.015882 waagent[1813]: 2024-12-13T01:26:19.015824Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 01:26:19.021039 waagent[1813]: 2024-12-13T01:26:19.020983Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 01:26:19.065448 waagent[1813]: 2024-12-13T01:26:19.065348Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 01:26:19.072857 waagent[1813]: 2024-12-13T01:26:19.072826Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 01:26:19.078262 waagent[1813]: 2024-12-13T01:26:19.078197Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 01:26:19.539199 waagent[1813]: 2024-12-13T01:26:19.539088Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 01:26:19.546400 waagent[1813]: 2024-12-13T01:26:19.546327Z INFO Daemon Daemon Forcing an update of the goal state. Dec 13 01:26:19.556510 waagent[1813]: 2024-12-13T01:26:19.556452Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:26:19.583218 waagent[1813]: 2024-12-13T01:26:19.583167Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Dec 13 01:26:19.589686 waagent[1813]: 2024-12-13T01:26:19.589637Z INFO Daemon Dec 13 01:26:19.592738 waagent[1813]: 2024-12-13T01:26:19.592684Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 13c4cbec-7b4f-4d61-914f-99a02ca7bb33 eTag: 9202752875275590443 source: Fabric] Dec 13 01:26:19.606191 waagent[1813]: 2024-12-13T01:26:19.606138Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 13 01:26:19.614727 waagent[1813]: 2024-12-13T01:26:19.614678Z INFO Daemon Dec 13 01:26:19.618154 waagent[1813]: 2024-12-13T01:26:19.618105Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:26:19.630802 waagent[1813]: 2024-12-13T01:26:19.630762Z INFO Daemon Daemon Downloading artifacts profile blob Dec 13 01:26:19.730533 waagent[1813]: 2024-12-13T01:26:19.730417Z INFO Daemon Downloaded certificate {'thumbprint': 'DD9968CEA760749E081E6FEFE76C870E2CCA5228', 'hasPrivateKey': False} Dec 13 01:26:19.742314 waagent[1813]: 2024-12-13T01:26:19.742261Z INFO Daemon Downloaded certificate {'thumbprint': '2E0554C021B5405497F31D4F747D7E89DA11AD3B', 'hasPrivateKey': True} Dec 13 01:26:19.753017 waagent[1813]: 2024-12-13T01:26:19.752960Z INFO Daemon Fetch goal state completed Dec 13 01:26:19.765969 waagent[1813]: 2024-12-13T01:26:19.765925Z INFO Daemon Daemon Starting provisioning Dec 13 01:26:19.771792 waagent[1813]: 2024-12-13T01:26:19.771721Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 01:26:19.777377 waagent[1813]: 2024-12-13T01:26:19.777322Z INFO Daemon Daemon Set hostname [ci-4081.2.1-a-d903163327] Dec 13 01:26:19.803461 waagent[1813]: 2024-12-13T01:26:19.803297Z INFO Daemon Daemon Publish hostname [ci-4081.2.1-a-d903163327] Dec 13 01:26:19.815467 waagent[1813]: 2024-12-13T01:26:19.810516Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 01:26:19.817828 waagent[1813]: 2024-12-13T01:26:19.817767Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 01:26:19.856951 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:19.856959 systemd-networkd[1333]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:26:19.856988 systemd-networkd[1333]: eth0: DHCP lease lost Dec 13 01:26:19.858569 waagent[1813]: 2024-12-13T01:26:19.858105Z INFO Daemon Daemon Create user account if not exists Dec 13 01:26:19.864507 waagent[1813]: 2024-12-13T01:26:19.864421Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 01:26:19.871911 waagent[1813]: 2024-12-13T01:26:19.871845Z INFO Daemon Daemon Configure sudoer Dec 13 01:26:19.871992 systemd-networkd[1333]: eth0: DHCPv6 lease lost Dec 13 01:26:19.878321 waagent[1813]: 2024-12-13T01:26:19.878246Z INFO Daemon Daemon Configure sshd Dec 13 01:26:19.883631 waagent[1813]: 2024-12-13T01:26:19.883568Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 13 01:26:19.899704 waagent[1813]: 2024-12-13T01:26:19.899626Z INFO Daemon Daemon Deploy ssh public key. Dec 13 01:26:19.913503 systemd-networkd[1333]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:26:21.012464 waagent[1813]: 2024-12-13T01:26:21.008985Z INFO Daemon Daemon Provisioning complete Dec 13 01:26:21.029049 waagent[1813]: 2024-12-13T01:26:21.028998Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 01:26:21.035565 waagent[1813]: 2024-12-13T01:26:21.035250Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 01:26:21.044804 waagent[1813]: 2024-12-13T01:26:21.044742Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Dec 13 01:26:21.186152 waagent[1879]: 2024-12-13T01:26:21.185271Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Dec 13 01:26:21.186152 waagent[1879]: 2024-12-13T01:26:21.185502Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.1 Dec 13 01:26:21.186152 waagent[1879]: 2024-12-13T01:26:21.185583Z INFO ExtHandler ExtHandler Python: 3.11.9 Dec 13 01:26:21.220163 waagent[1879]: 2024-12-13T01:26:21.220078Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:26:21.220540 waagent[1879]: 2024-12-13T01:26:21.220498Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:21.220696 waagent[1879]: 2024-12-13T01:26:21.220662Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:21.229765 waagent[1879]: 2024-12-13T01:26:21.229683Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:26:21.242121 waagent[1879]: 2024-12-13T01:26:21.242073Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 01:26:21.242834 waagent[1879]: 2024-12-13T01:26:21.242791Z INFO ExtHandler Dec 13 01:26:21.243019 waagent[1879]: 2024-12-13T01:26:21.242985Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f2662d7e-a1a2-4cb9-9cef-4a453f08ebef eTag: 9202752875275590443 source: Fabric] Dec 13 01:26:21.243422 waagent[1879]: 2024-12-13T01:26:21.243384Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 13 01:26:21.244121 waagent[1879]: 2024-12-13T01:26:21.244078Z INFO ExtHandler Dec 13 01:26:21.244257 waagent[1879]: 2024-12-13T01:26:21.244227Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:26:21.249468 waagent[1879]: 2024-12-13T01:26:21.248772Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 01:26:21.329068 waagent[1879]: 2024-12-13T01:26:21.328914Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DD9968CEA760749E081E6FEFE76C870E2CCA5228', 'hasPrivateKey': False} Dec 13 01:26:21.329454 waagent[1879]: 2024-12-13T01:26:21.329394Z INFO ExtHandler Downloaded certificate {'thumbprint': '2E0554C021B5405497F31D4F747D7E89DA11AD3B', 'hasPrivateKey': True} Dec 13 01:26:21.329920 waagent[1879]: 2024-12-13T01:26:21.329849Z INFO ExtHandler Fetch goal state completed Dec 13 01:26:21.350106 waagent[1879]: 2024-12-13T01:26:21.350040Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1879 Dec 13 01:26:21.350268 waagent[1879]: 2024-12-13T01:26:21.350232Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 13 01:26:21.351991 waagent[1879]: 2024-12-13T01:26:21.351941Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.1', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:26:21.352390 waagent[1879]: 2024-12-13T01:26:21.352353Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:26:21.385770 waagent[1879]: 2024-12-13T01:26:21.385722Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:26:21.385988 waagent[1879]: 2024-12-13T01:26:21.385946Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:26:21.392510 waagent[1879]: 2024-12-13T01:26:21.392011Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 01:26:21.398680 systemd[1]: Reloading requested from client PID 1894 ('systemctl') (unit waagent.service)... Dec 13 01:26:21.398964 systemd[1]: Reloading... Dec 13 01:26:21.486541 zram_generator::config[1931]: No configuration found. Dec 13 01:26:21.588952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:21.667091 systemd[1]: Reloading finished in 267 ms. Dec 13 01:26:21.691464 waagent[1879]: 2024-12-13T01:26:21.688577Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Dec 13 01:26:21.695883 systemd[1]: Reloading requested from client PID 1982 ('systemctl') (unit waagent.service)... Dec 13 01:26:21.695992 systemd[1]: Reloading... Dec 13 01:26:21.798591 zram_generator::config[2019]: No configuration found. Dec 13 01:26:21.901657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:21.979284 systemd[1]: Reloading finished in 282 ms. 
Dec 13 01:26:22.003567 waagent[1879]: 2024-12-13T01:26:22.002769Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 13 01:26:22.003567 waagent[1879]: 2024-12-13T01:26:22.002948Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 13 01:26:22.333016 waagent[1879]: 2024-12-13T01:26:22.332924Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 01:26:22.333626 waagent[1879]: 2024-12-13T01:26:22.333573Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Dec 13 01:26:22.334455 waagent[1879]: 2024-12-13T01:26:22.334364Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:26:22.334588 waagent[1879]: 2024-12-13T01:26:22.334515Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:22.334767 waagent[1879]: 2024-12-13T01:26:22.334719Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:22.335199 waagent[1879]: 2024-12-13T01:26:22.335140Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:26:22.335621 waagent[1879]: 2024-12-13T01:26:22.335497Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 01:26:22.335971 waagent[1879]: 2024-12-13T01:26:22.335914Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:26:22.336131 waagent[1879]: 2024-12-13T01:26:22.336089Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:22.336207 waagent[1879]: 2024-12-13T01:26:22.336176Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:22.336347 waagent[1879]: 2024-12-13T01:26:22.336307Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:26:22.336408 waagent[1879]: 2024-12-13T01:26:22.336380Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:26:22.336482 waagent[1879]: 2024-12-13T01:26:22.336453Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:26:22.337339 waagent[1879]: 2024-12-13T01:26:22.337280Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:26:22.337339 waagent[1879]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:26:22.337339 waagent[1879]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:26:22.337339 waagent[1879]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:26:22.337339 waagent[1879]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:22.337339 waagent[1879]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:22.337339 waagent[1879]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:22.338620 waagent[1879]: 2024-12-13T01:26:22.337077Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:26:22.339336 waagent[1879]: 2024-12-13T01:26:22.338766Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:26:22.339336 waagent[1879]: 2024-12-13T01:26:22.338844Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 01:26:22.339463 waagent[1879]: 2024-12-13T01:26:22.339338Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:26:22.344804 waagent[1879]: 2024-12-13T01:26:22.344625Z INFO ExtHandler ExtHandler Dec 13 01:26:22.344804 waagent[1879]: 2024-12-13T01:26:22.344743Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 19e3ab95-20de-4c08-b5a1-252b9b07ddd4 correlation 81fe602b-a82e-4163-b1ab-ce4af361c940 created: 2024-12-13T01:25:08.154518Z] Dec 13 01:26:22.345544 waagent[1879]: 2024-12-13T01:26:22.345488Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 01:26:22.346365 waagent[1879]: 2024-12-13T01:26:22.346320Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 13 01:26:22.379640 waagent[1879]: 2024-12-13T01:26:22.379566Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:26:22.379640 waagent[1879]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:26:22.379640 waagent[1879]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:26:22.379640 waagent[1879]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:78:83 brd ff:ff:ff:ff:ff:ff Dec 13 01:26:22.379640 waagent[1879]: 3: enP46622s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:78:83 brd ff:ff:ff:ff:ff:ff\ altname enP46622p0s2 Dec 13 01:26:22.379640 waagent[1879]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:26:22.379640 waagent[1879]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:26:22.379640 waagent[1879]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:26:22.379640 waagent[1879]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:26:22.379640 waagent[1879]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 01:26:22.379640 waagent[1879]: 2: eth0 inet6 fe80::222:48ff:fe7a:7883/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:26:22.379640 waagent[1879]: 3: enP46622s1 inet6 fe80::222:48ff:fe7a:7883/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:26:22.396345 waagent[1879]: 2024-12-13T01:26:22.396223Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7D42159F-917C-45A7-8A27-C715D49BFBD9;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 01:26:22.432567 waagent[1879]: 2024-12-13T01:26:22.432332Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Dec 13 01:26:22.432567 waagent[1879]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:22.432567 waagent[1879]: pkts bytes target prot opt in out source destination Dec 13 01:26:22.432567 waagent[1879]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:22.432567 waagent[1879]: pkts bytes target prot opt in out source destination Dec 13 01:26:22.432567 waagent[1879]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:22.432567 waagent[1879]: pkts bytes target prot opt in out source destination Dec 13 01:26:22.432567 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:26:22.432567 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:26:22.432567 waagent[1879]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:26:22.435597 waagent[1879]: 2024-12-13T01:26:22.435523Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 01:26:22.435597 waagent[1879]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:22.435597 waagent[1879]: pkts bytes target prot opt in out source destination Dec 13 01:26:22.435597 waagent[1879]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:22.435597 waagent[1879]: pkts bytes target prot opt in out source destination Dec 13 01:26:22.435597 waagent[1879]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:22.435597 waagent[1879]: pkts bytes target prot opt in out source destination Dec 13 01:26:22.435597 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:26:22.435597 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:26:22.435597 waagent[1879]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:26:22.435967 waagent[1879]: 2024-12-13T01:26:22.435836Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:26:26.973479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:26.983703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:27.071603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:27.083788 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:27.126563 kubelet[2112]: E1213 01:26:27.126500 2112 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:27.129401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:27.129545 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:37.223607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:26:37.231598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:37.327054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:26:37.331595 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:37.375793 kubelet[2128]: E1213 01:26:37.375686 2128 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:37.378360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:37.378532 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:38.993547 chronyd[1654]: Selected source PHC0 Dec 13 01:26:47.473567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:26:47.481606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:47.566420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:47.570305 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:47.666705 kubelet[2144]: E1213 01:26:47.666636 2144 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:47.668933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:47.669060 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:57.723727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:26:57.734635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:57.839275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:57.843644 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:57.883845 kubelet[2159]: E1213 01:26:57.883776 2159 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:57.886783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:57.887062 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:59.715142 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 01:27:00.335456 update_engine[1683]: I20241213 01:27:00.335173 1683 update_attempter.cc:509] Updating boot flags... Dec 13 01:27:00.399731 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2179) Dec 13 01:27:00.501477 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2179) Dec 13 01:27:07.973500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:27:07.980631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:27:08.066814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:08.073757 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:08.116663 kubelet[2241]: E1213 01:27:08.116601 2241 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:08.118821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:08.118947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:15.083328 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:27:15.091714 systemd[1]: Started sshd@0-10.200.20.4:22-10.200.16.10:52682.service - OpenSSH per-connection server daemon (10.200.16.10:52682). Dec 13 01:27:15.640856 sshd[2251]: Accepted publickey for core from 10.200.16.10 port 52682 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:15.642139 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:15.645735 systemd-logind[1679]: New session 3 of user core. Dec 13 01:27:15.655566 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:27:16.054888 systemd[1]: Started sshd@1-10.200.20.4:22-10.200.16.10:52694.service - OpenSSH per-connection server daemon (10.200.16.10:52694). Dec 13 01:27:16.488582 sshd[2256]: Accepted publickey for core from 10.200.16.10 port 52694 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:16.489882 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:16.494565 systemd-logind[1679]: New session 4 of user core. Dec 13 01:27:16.499647 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:27:16.821652 sshd[2256]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:16.824318 systemd[1]: sshd@1-10.200.20.4:22-10.200.16.10:52694.service: Deactivated successfully. Dec 13 01:27:16.825947 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:27:16.827180 systemd-logind[1679]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:27:16.828145 systemd-logind[1679]: Removed session 4. Dec 13 01:27:16.901292 systemd[1]: Started sshd@2-10.200.20.4:22-10.200.16.10:52704.service - OpenSSH per-connection server daemon (10.200.16.10:52704). Dec 13 01:27:17.335224 sshd[2263]: Accepted publickey for core from 10.200.16.10 port 52704 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:17.336529 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:17.341208 systemd-logind[1679]: New session 5 of user core. Dec 13 01:27:17.346577 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:27:17.664362 sshd[2263]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:17.668058 systemd[1]: sshd@2-10.200.20.4:22-10.200.16.10:52704.service: Deactivated successfully. Dec 13 01:27:17.669704 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:27:17.670380 systemd-logind[1679]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:27:17.671155 systemd-logind[1679]: Removed session 5. 
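Note: each sshd@N-LOCAL:22-PEER:PORT.service unit above is a transient per-connection instance spawned from sshd's socket activation, and each successful PAM login becomes its own session-N.scope under systemd-logind. The same state can be inspected interactively with generic systemd commands (nothing host-specific assumed):

    # one transient sshd unit per open TCP connection
    systemctl list-units 'sshd@*.service' --no-legend
    # logind sessions backing the session-N.scope units seen above
    loginctl list-sessions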
Dec 13 01:27:17.746925 systemd[1]: Started sshd@3-10.200.20.4:22-10.200.16.10:52716.service - OpenSSH per-connection server daemon (10.200.16.10:52716). Dec 13 01:27:18.181342 sshd[2270]: Accepted publickey for core from 10.200.16.10 port 52716 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:18.182669 sshd[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:18.183586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:27:18.191623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:18.197759 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:27:18.198066 systemd-logind[1679]: New session 6 of user core. Dec 13 01:27:18.292622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:18.295550 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:18.341309 kubelet[2281]: E1213 01:27:18.341240 2281 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:18.343631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:18.343766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:18.514718 sshd[2270]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:18.518643 systemd-logind[1679]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:27:18.519200 systemd[1]: sshd@3-10.200.20.4:22-10.200.16.10:52716.service: Deactivated successfully. Dec 13 01:27:18.520851 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:27:18.521642 systemd-logind[1679]: Removed session 6. Dec 13 01:27:18.593990 systemd[1]: Started sshd@4-10.200.20.4:22-10.200.16.10:34334.service - OpenSSH per-connection server daemon (10.200.16.10:34334). Dec 13 01:27:19.020599 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 34334 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:19.022005 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:19.025671 systemd-logind[1679]: New session 7 of user core. Dec 13 01:27:19.033613 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:27:19.358304 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:27:19.358600 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:19.385170 sudo[2296]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:19.464307 sshd[2293]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:19.467208 systemd-logind[1679]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:27:19.467767 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:27:19.468312 systemd[1]: sshd@4-10.200.20.4:22-10.200.16.10:34334.service: Deactivated successfully. Dec 13 01:27:19.470838 systemd-logind[1679]: Removed session 7. Dec 13 01:27:19.552664 systemd[1]: Started sshd@5-10.200.20.4:22-10.200.16.10:34344.service - OpenSSH per-connection server daemon (10.200.16.10:34344). 
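Note: the first sudo invocation above ("/usr/sbin/setenforce 1") flips SELinux into enforcing mode for the running kernel. A quick verification, using the standard SELinux utilities where the policy toolchain is installed:

    getenforce   # should now print "Enforcing"
    sestatus     # fuller report: current mode, policy name, selinuxfs mount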
Dec 13 01:27:19.981839 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 34344 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:19.983159 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:19.986823 systemd-logind[1679]: New session 8 of user core. Dec 13 01:27:19.997555 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:27:20.231154 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:27:20.231744 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:20.234855 sudo[2305]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:20.239183 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:27:20.239424 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:20.249661 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:27:20.252263 auditctl[2308]: No rules Dec 13 01:27:20.252120 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:27:20.252567 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:27:20.254898 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:27:20.276284 augenrules[2326]: No rules Dec 13 01:27:20.277709 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:27:20.279046 sudo[2304]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:20.363315 sshd[2301]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:20.365955 systemd-logind[1679]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:27:20.367386 systemd[1]: sshd@5-10.200.20.4:22-10.200.16.10:34344.service: Deactivated successfully. Dec 13 01:27:20.368993 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:27:20.369914 systemd-logind[1679]: Removed session 8. Dec 13 01:27:20.439790 systemd[1]: Started sshd@6-10.200.20.4:22-10.200.16.10:34354.service - OpenSSH per-connection server daemon (10.200.16.10:34354). Dec 13 01:27:20.866165 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 34354 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:27:20.867553 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:20.872261 systemd-logind[1679]: New session 9 of user core. Dec 13 01:27:20.877703 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:27:21.110724 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:27:21.110993 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:27:22.146840 (dockerd)[2353]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:27:22.146842 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:27:22.774998 dockerd[2353]: time="2024-12-13T01:27:22.774723540Z" level=info msg="Starting up" Dec 13 01:27:23.174066 dockerd[2353]: time="2024-12-13T01:27:23.173602223Z" level=info msg="Loading containers: start." 
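Note: the audit-rules restart above is the usual augenrules flow: the service concatenates /etc/audit/rules.d/*.rules and loads the result into the kernel, which is why deleting the two rule files and restarting yields "No rules" from both auditctl and augenrules. The equivalent steps by hand:

    # compile /etc/audit/rules.d/*.rules and load the result into the kernel
    augenrules --load
    # list what the kernel currently enforces (here it would print "No rules")
    auditctl -l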
Dec 13 01:27:23.310456 kernel: Initializing XFRM netlink socket Dec 13 01:27:23.424853 systemd-networkd[1333]: docker0: Link UP Dec 13 01:27:23.459698 dockerd[2353]: time="2024-12-13T01:27:23.459660606Z" level=info msg="Loading containers: done." Dec 13 01:27:23.483192 dockerd[2353]: time="2024-12-13T01:27:23.483028472Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:27:23.483192 dockerd[2353]: time="2024-12-13T01:27:23.483144592Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:27:23.483698 dockerd[2353]: time="2024-12-13T01:27:23.483509311Z" level=info msg="Daemon has completed initialization" Dec 13 01:27:23.538014 dockerd[2353]: time="2024-12-13T01:27:23.537931946Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:27:23.538590 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:27:25.123572 containerd[1717]: time="2024-12-13T01:27:25.123531263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:27:26.243477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295061035.mount: Deactivated successfully. Dec 13 01:27:28.060297 containerd[1717]: time="2024-12-13T01:27:28.060242368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.063729 containerd[1717]: time="2024-12-13T01:27:28.063693434Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 01:27:28.067925 containerd[1717]: time="2024-12-13T01:27:28.066559257Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.072036 containerd[1717]: time="2024-12-13T01:27:28.072003486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:28.073231 containerd[1717]: time="2024-12-13T01:27:28.073193484Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.949621701s" Dec 13 01:27:28.073231 containerd[1717]: time="2024-12-13T01:27:28.073230844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:27:28.095543 containerd[1717]: time="2024-12-13T01:27:28.095503160Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:27:28.473417 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 01:27:28.483612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:28.568012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
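Note: the PullImage requests beginning at 01:27:25 are handled by containerd's CRI plugin (the containerd[1717] process), independent of the Docker engine that just initialized. Rather than fetching the control-plane images one by one during bootstrap, they can be pre-fetched with standard tooling (release version taken from the log):

    # pre-fetch every control-plane image for the release being installed
    kubeadm config images pull --kubernetes-version v1.29.12
    # or pull a single image directly over CRI
    crictl pull registry.k8s.io/kube-apiserver:v1.29.12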
Dec 13 01:27:28.571896 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:28.609517 kubelet[2560]: E1213 01:27:28.609405 2560 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:28.611678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:28.611801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:29.840468 containerd[1717]: time="2024-12-13T01:27:29.840147355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:29.870240 containerd[1717]: time="2024-12-13T01:27:29.870205416Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 01:27:29.888554 containerd[1717]: time="2024-12-13T01:27:29.888504939Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:29.896235 containerd[1717]: time="2024-12-13T01:27:29.896178644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:29.897017 containerd[1717]: time="2024-12-13T01:27:29.896980923Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.801438763s" Dec 13 01:27:29.897017 containerd[1717]: time="2024-12-13T01:27:29.897014763Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:27:29.916661 containerd[1717]: time="2024-12-13T01:27:29.916619444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:27:31.553895 containerd[1717]: time="2024-12-13T01:27:31.553835451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:31.557245 containerd[1717]: time="2024-12-13T01:27:31.557207365Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 01:27:31.562471 containerd[1717]: time="2024-12-13T01:27:31.562135795Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:31.568446 containerd[1717]: time="2024-12-13T01:27:31.568371143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Dec 13 01:27:31.569721 containerd[1717]: time="2024-12-13T01:27:31.569486860Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.652826696s" Dec 13 01:27:31.569721 containerd[1717]: time="2024-12-13T01:27:31.569519140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:27:31.591561 containerd[1717]: time="2024-12-13T01:27:31.591511057Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:27:32.727605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678954614.mount: Deactivated successfully. Dec 13 01:27:33.040361 containerd[1717]: time="2024-12-13T01:27:33.039627438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:33.042158 containerd[1717]: time="2024-12-13T01:27:33.042123993Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 01:27:33.044820 containerd[1717]: time="2024-12-13T01:27:33.044770348Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:33.053541 containerd[1717]: time="2024-12-13T01:27:33.052483972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:33.053541 containerd[1717]: time="2024-12-13T01:27:33.053153531Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.461597674s" Dec 13 01:27:33.053541 containerd[1717]: time="2024-12-13T01:27:33.053178931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:27:33.072286 containerd[1717]: time="2024-12-13T01:27:33.072215893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:27:33.748984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296008567.mount: Deactivated successfully. 
Dec 13 01:27:34.868137 containerd[1717]: time="2024-12-13T01:27:34.868067028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:34.872049 containerd[1717]: time="2024-12-13T01:27:34.871791220Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:27:34.876262 containerd[1717]: time="2024-12-13T01:27:34.876212212Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:34.881933 containerd[1717]: time="2024-12-13T01:27:34.881874120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:34.883061 containerd[1717]: time="2024-12-13T01:27:34.882924958Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.810621145s" Dec 13 01:27:34.883061 containerd[1717]: time="2024-12-13T01:27:34.882960638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:27:34.903092 containerd[1717]: time="2024-12-13T01:27:34.902856359Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:27:35.483632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290941191.mount: Deactivated successfully. 
Dec 13 01:27:35.511576 containerd[1717]: time="2024-12-13T01:27:35.511522658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:35.514519 containerd[1717]: time="2024-12-13T01:27:35.514488412Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:27:35.521055 containerd[1717]: time="2024-12-13T01:27:35.521009719Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:35.526774 containerd[1717]: time="2024-12-13T01:27:35.526730948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:35.527693 containerd[1717]: time="2024-12-13T01:27:35.527385187Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 624.489588ms" Dec 13 01:27:35.527693 containerd[1717]: time="2024-12-13T01:27:35.527417107Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:27:35.546176 containerd[1717]: time="2024-12-13T01:27:35.545944951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:27:36.927738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974044209.mount: Deactivated successfully. Dec 13 01:27:38.621884 containerd[1717]: time="2024-12-13T01:27:38.621826063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:38.626350 containerd[1717]: time="2024-12-13T01:27:38.626304574Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 01:27:38.631946 containerd[1717]: time="2024-12-13T01:27:38.631894923Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:38.639809 containerd[1717]: time="2024-12-13T01:27:38.639749228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:38.641139 containerd[1717]: time="2024-12-13T01:27:38.640851186Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.094869955s" Dec 13 01:27:38.641139 containerd[1717]: time="2024-12-13T01:27:38.640887346Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:27:38.671893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
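Note: by this point kubelet.service has been restart-scheduled eight times. systemd keeps that counter on the unit itself, so the loop can be confirmed without scrolling the journal (generic systemd commands, not taken from this host):

    # NRestarts tracks automatic restarts; Result shows the last failure mode
    systemctl show kubelet.service -p NRestarts -p Result
    # the last few kubelet journal entries, for the underlying error
    journalctl -u kubelet.service -n 20 --no-pager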
Dec 13 01:27:38.683247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:39.135749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:39.140339 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:39.185087 kubelet[2717]: E1213 01:27:39.185013 2717 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:39.187371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:39.187515 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:44.987755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:44.999855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:45.024789 systemd[1]: Reloading requested from client PID 2779 ('systemctl') (unit session-9.scope)... Dec 13 01:27:45.024805 systemd[1]: Reloading... Dec 13 01:27:45.141467 zram_generator::config[2819]: No configuration found. Dec 13 01:27:45.238270 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:45.314462 systemd[1]: Reloading finished in 288 ms. Dec 13 01:27:45.352823 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:27:45.352901 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:27:45.353182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:45.359742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:45.558112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:45.567708 (kubelet)[2886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:45.611070 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:45.611070 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:45.611070 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
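Note: this kubelet start (PID 2886) finally has a config file, and its first log lines warn that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated flags that belong in that file. A sketch of the config-file equivalents, assuming the v1beta1 KubeletConfiguration field names; the endpoint path is illustrative, not read from this host, while the plugin directory matches the Flexvolume path logged below:

    # sketch: flag-to-config-file equivalents for the deprecation warnings above
    cat <<'EOF' >>/var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF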
Dec 13 01:27:45.612058 kubelet[2886]: I1213 01:27:45.612009 2886 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:46.421462 kubelet[2886]: I1213 01:27:46.421086 2886 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:46.421462 kubelet[2886]: I1213 01:27:46.421152 2886 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:46.421613 kubelet[2886]: I1213 01:27:46.421478 2886 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:46.438280 kubelet[2886]: E1213 01:27:46.438230 2886 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.438646 kubelet[2886]: I1213 01:27:46.438621 2886 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:46.446533 kubelet[2886]: I1213 01:27:46.446507 2886 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:46.447930 kubelet[2886]: I1213 01:27:46.447904 2886 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:46.448116 kubelet[2886]: I1213 01:27:46.448098 2886 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:46.448197 kubelet[2886]: I1213 01:27:46.448123 2886 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:46.448197 kubelet[2886]: I1213 01:27:46.448131 2886 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:46.449376 kubelet[2886]: I1213 01:27:46.449354 2886 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:46.451672 kubelet[2886]: I1213 01:27:46.451653 2886 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:46.451717 kubelet[2886]: 
I1213 01:27:46.451678 2886 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:46.452053 kubelet[2886]: I1213 01:27:46.452029 2886 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:46.452103 kubelet[2886]: I1213 01:27:46.452060 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:46.453456 kubelet[2886]: W1213 01:27:46.453317 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-d903163327&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.453456 kubelet[2886]: E1213 01:27:46.453368 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-d903163327&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.454963 kubelet[2886]: W1213 01:27:46.454924 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.455034 kubelet[2886]: E1213 01:27:46.454980 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.455761 kubelet[2886]: I1213 01:27:46.455343 2886 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:46.455761 kubelet[2886]: I1213 01:27:46.455639 2886 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:46.457061 kubelet[2886]: W1213 01:27:46.456220 2886 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
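Note: every reflector and certificate-signing call above fails with "dial tcp 10.200.20.4:6443: connect: connection refused". That is the normal control-plane chicken-and-egg: this kubelet is the component that will start the static kube-apiserver pod it is trying to reach, so the errors persist until that pod is up. A direct probe fails the same way at this stage:

    # fails with "connection refused" until the static kube-apiserver pod runs
    curl -k https://10.200.20.4:6443/healthz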
Dec 13 01:27:46.457061 kubelet[2886]: I1213 01:27:46.456724 2886 server.go:1256] "Started kubelet" Dec 13 01:27:46.458588 kubelet[2886]: I1213 01:27:46.458570 2886 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:46.460074 kubelet[2886]: I1213 01:27:46.460039 2886 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:46.461204 kubelet[2886]: I1213 01:27:46.461089 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:46.461584 kubelet[2886]: I1213 01:27:46.461565 2886 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:46.462731 kubelet[2886]: I1213 01:27:46.462701 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:46.467050 kubelet[2886]: E1213 01:27:46.467010 2886 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-d903163327.1810984228c1f0ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-d903163327,UID:ci-4081.2.1-a-d903163327,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-d903163327,},FirstTimestamp:2024-12-13 01:27:46.45670321 +0000 UTC m=+0.885808599,LastTimestamp:2024-12-13 01:27:46.45670321 +0000 UTC m=+0.885808599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-d903163327,}" Dec 13 01:27:46.469088 kubelet[2886]: I1213 01:27:46.469052 2886 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:46.469344 kubelet[2886]: I1213 01:27:46.469313 2886 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:46.470016 kubelet[2886]: I1213 01:27:46.469983 2886 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:46.471563 kubelet[2886]: W1213 01:27:46.471519 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.471563 kubelet[2886]: E1213 01:27:46.471568 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.473054 kubelet[2886]: E1213 01:27:46.472856 2886 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:46.473054 kubelet[2886]: E1213 01:27:46.473030 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-d903163327?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="200ms" Dec 13 01:27:46.473802 kubelet[2886]: I1213 01:27:46.473764 2886 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:46.475455 kubelet[2886]: I1213 01:27:46.474355 2886 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:46.476277 kubelet[2886]: I1213 01:27:46.476258 2886 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:46.498517 kubelet[2886]: I1213 01:27:46.498491 2886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:46.499920 kubelet[2886]: I1213 01:27:46.499903 2886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:46.500025 kubelet[2886]: I1213 01:27:46.500015 2886 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:46.500087 kubelet[2886]: I1213 01:27:46.500080 2886 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:46.500173 kubelet[2886]: E1213 01:27:46.500165 2886 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:46.502647 kubelet[2886]: W1213 01:27:46.502580 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.502762 kubelet[2886]: E1213 01:27:46.502751 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:46.601525 kubelet[2886]: E1213 01:27:46.601007 2886 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:46.674236 kubelet[2886]: E1213 01:27:46.674142 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-d903163327?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="400ms" Dec 13 01:27:46.785423 kubelet[2886]: I1213 01:27:46.785140 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:46.785708 kubelet[2886]: E1213 01:27:46.785593 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:46.785881 kubelet[2886]: I1213 01:27:46.785861 2886 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:46.785881 kubelet[2886]: I1213 01:27:46.785875 2886 cpu_manager.go:215] 
"Reconciling" reconcilePeriod="10s" Dec 13 01:27:46.785954 kubelet[2886]: I1213 01:27:46.785891 2886 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:46.802083 kubelet[2886]: E1213 01:27:46.802047 2886 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:46.987949 kubelet[2886]: I1213 01:27:46.987920 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:46.988275 kubelet[2886]: E1213 01:27:46.988248 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:47.074854 kubelet[2886]: E1213 01:27:47.074823 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-d903163327?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="800ms" Dec 13 01:27:47.202408 kubelet[2886]: E1213 01:27:47.202366 2886 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:47.307246 kubelet[2886]: W1213 01:27:47.307148 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-d903163327&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:47.307246 kubelet[2886]: E1213 01:27:47.307215 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-d903163327&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:47.390622 kubelet[2886]: I1213 01:27:47.390484 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:47.390806 kubelet[2886]: E1213 01:27:47.390783 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:47.434284 kubelet[2886]: W1213 01:27:47.434229 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:47.434284 kubelet[2886]: E1213 01:27:47.434267 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:48.026404 kubelet[2886]: W1213 01:27:47.575638 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:48.026404 kubelet[2886]: E1213 01:27:47.575678 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:48.026404 kubelet[2886]: E1213 01:27:47.875997 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-d903163327?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="1.6s" Dec 13 01:27:48.026404 kubelet[2886]: E1213 01:27:48.002640 2886 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:48.030544 kubelet[2886]: I1213 01:27:48.030513 2886 policy_none.go:49] "None policy: Start" Dec 13 01:27:48.031261 kubelet[2886]: I1213 01:27:48.031238 2886 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:48.031326 kubelet[2886]: I1213 01:27:48.031279 2886 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:48.037165 kubelet[2886]: W1213 01:27:48.037077 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:48.037165 kubelet[2886]: E1213 01:27:48.037127 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:48.041897 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:27:48.050095 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:27:48.053751 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
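Note: two details in this stretch. First, the "Failed to ensure lease exists" retry interval doubles on each failure (200ms, 400ms, 800ms, 1.6s, 3.2s), a plain exponential backoff. Second, the kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice units just created are the QoS tiers of the kubelet's cgroup hierarchy under the systemd cgroup driver, and are inspectable as ordinary systemd units:

    # the kubelet's QoS cgroup tiers, as systemd units
    systemctl status kubepods.slice kubepods-burstable.slice kubepods-besteffort.slice --no-pager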
Dec 13 01:27:48.062515 kubelet[2886]: I1213 01:27:48.062347 2886 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:48.062647 kubelet[2886]: I1213 01:27:48.062623 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:48.065111 kubelet[2886]: E1213 01:27:48.065035 2886 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-a-d903163327\" not found" Dec 13 01:27:48.193178 kubelet[2886]: I1213 01:27:48.193100 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:48.193447 kubelet[2886]: E1213 01:27:48.193420 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:48.509809 kubelet[2886]: E1213 01:27:48.509781 2886 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:49.212688 kubelet[2886]: W1213 01:27:49.212650 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:49.212688 kubelet[2886]: E1213 01:27:49.212693 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:49.476393 kubelet[2886]: E1213 01:27:49.476365 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-d903163327?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="3.2s" Dec 13 01:27:49.603164 kubelet[2886]: I1213 01:27:49.603071 2886 topology_manager.go:215] "Topology Admit Handler" podUID="1473af00dfea6427f20b3019abc11dbf" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.604774 kubelet[2886]: I1213 01:27:49.604590 2886 topology_manager.go:215] "Topology Admit Handler" podUID="8be47a696d6afa6dfb9f33dbf6bd8615" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.606050 kubelet[2886]: I1213 01:27:49.606026 2886 topology_manager.go:215] "Topology Admit Handler" podUID="7241a5c2b5093e5488e02e2b2bb778c7" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.613179 systemd[1]: Created slice kubepods-burstable-pod1473af00dfea6427f20b3019abc11dbf.slice - libcontainer container kubepods-burstable-pod1473af00dfea6427f20b3019abc11dbf.slice. Dec 13 01:27:49.623953 systemd[1]: Created slice kubepods-burstable-pod8be47a696d6afa6dfb9f33dbf6bd8615.slice - libcontainer container kubepods-burstable-pod8be47a696d6afa6dfb9f33dbf6bd8615.slice. 
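Note: the three "Topology Admit Handler" entries above are static pods read from the manifest directory registered earlier ("Adding static pod path" path=/etc/kubernetes/manifests); per-pod burstable slices are then created for each. On a typical kubeadm control plane that directory holds one manifest per component (plus etcd.yaml when etcd runs locally):

    # the static pod manifests the kubelet is admitting
    ls /etc/kubernetes/manifests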
Dec 13 01:27:49.628850 systemd[1]: Created slice kubepods-burstable-pod7241a5c2b5093e5488e02e2b2bb778c7.slice - libcontainer container kubepods-burstable-pod7241a5c2b5093e5488e02e2b2bb778c7.slice. Dec 13 01:27:49.686791 kubelet[2886]: I1213 01:27:49.686739 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.686919 kubelet[2886]: I1213 01:27:49.686824 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.686919 kubelet[2886]: I1213 01:27:49.686845 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.686919 kubelet[2886]: I1213 01:27:49.686905 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1473af00dfea6427f20b3019abc11dbf-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-d903163327\" (UID: \"1473af00dfea6427f20b3019abc11dbf\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.687019 kubelet[2886]: I1213 01:27:49.686925 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1473af00dfea6427f20b3019abc11dbf-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-d903163327\" (UID: \"1473af00dfea6427f20b3019abc11dbf\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.687019 kubelet[2886]: I1213 01:27:49.686945 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1473af00dfea6427f20b3019abc11dbf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-d903163327\" (UID: \"1473af00dfea6427f20b3019abc11dbf\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.687062 kubelet[2886]: I1213 01:27:49.687023 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.687062 kubelet[2886]: I1213 01:27:49.687044 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: 
\"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.687109 kubelet[2886]: I1213 01:27:49.687105 2886 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7241a5c2b5093e5488e02e2b2bb778c7-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-d903163327\" (UID: \"7241a5c2b5093e5488e02e2b2bb778c7\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-d903163327" Dec 13 01:27:49.784361 kubelet[2886]: W1213 01:27:49.784274 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:49.784361 kubelet[2886]: E1213 01:27:49.784320 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:49.796091 kubelet[2886]: I1213 01:27:49.796043 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:49.796340 kubelet[2886]: E1213 01:27:49.796320 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:49.922510 containerd[1717]: time="2024-12-13T01:27:49.922416410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-d903163327,Uid:1473af00dfea6427f20b3019abc11dbf,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:49.928171 containerd[1717]: time="2024-12-13T01:27:49.927954039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-d903163327,Uid:8be47a696d6afa6dfb9f33dbf6bd8615,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:49.931278 containerd[1717]: time="2024-12-13T01:27:49.930858233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-d903163327,Uid:7241a5c2b5093e5488e02e2b2bb778c7,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:50.175196 kubelet[2886]: W1213 01:27:50.175102 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-d903163327&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:50.175338 kubelet[2886]: E1213 01:27:50.175327 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-d903163327&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:50.521792 kubelet[2886]: W1213 01:27:50.521733 2886 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:50.521792 kubelet[2886]: E1213 01:27:50.521770 2886 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused Dec 13 01:27:50.593322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749995276.mount: Deactivated successfully. Dec 13 01:27:50.637551 containerd[1717]: time="2024-12-13T01:27:50.637503212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:50.641844 containerd[1717]: time="2024-12-13T01:27:50.641809123Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:27:50.645550 containerd[1717]: time="2024-12-13T01:27:50.645508676Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:50.649016 containerd[1717]: time="2024-12-13T01:27:50.648986589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:50.655464 containerd[1717]: time="2024-12-13T01:27:50.654457619Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:50.657698 containerd[1717]: time="2024-12-13T01:27:50.657660132Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:50.665456 containerd[1717]: time="2024-12-13T01:27:50.664787478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:50.668295 containerd[1717]: time="2024-12-13T01:27:50.668235672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:50.669127 containerd[1717]: time="2024-12-13T01:27:50.668918670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 746.40594ms" Dec 13 01:27:50.671594 containerd[1717]: time="2024-12-13T01:27:50.671558305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 743.547626ms" Dec 13 01:27:50.677019 containerd[1717]: time="2024-12-13T01:27:50.676976335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 746.073062ms" Dec 13 01:27:51.293660 containerd[1717]: 
time="2024-12-13T01:27:51.293012825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:51.293660 containerd[1717]: time="2024-12-13T01:27:51.293078025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:51.293660 containerd[1717]: time="2024-12-13T01:27:51.293093385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:51.293660 containerd[1717]: time="2024-12-13T01:27:51.293182985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:51.298493 containerd[1717]: time="2024-12-13T01:27:51.298185298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:51.298493 containerd[1717]: time="2024-12-13T01:27:51.298235938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:51.298493 containerd[1717]: time="2024-12-13T01:27:51.298250218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:51.298627 containerd[1717]: time="2024-12-13T01:27:51.298361578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:51.299572 containerd[1717]: time="2024-12-13T01:27:51.299287216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:51.299572 containerd[1717]: time="2024-12-13T01:27:51.299338416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:51.299572 containerd[1717]: time="2024-12-13T01:27:51.299353256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:51.299572 containerd[1717]: time="2024-12-13T01:27:51.299419016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:51.324627 systemd[1]: Started cri-containerd-794f8346183101612d8644a8c1395d988233557b72e819bad49443418fa720ff.scope - libcontainer container 794f8346183101612d8644a8c1395d988233557b72e819bad49443418fa720ff. Dec 13 01:27:51.334590 systemd[1]: Started cri-containerd-d1d6ca073f7211f815eb1ad86f44f0fe533d1db994dabdd58687845540876597.scope - libcontainer container d1d6ca073f7211f815eb1ad86f44f0fe533d1db994dabdd58687845540876597. Dec 13 01:27:51.335589 systemd[1]: Started cri-containerd-df71091e2ffd0b99d763107d02d9cc37da039838cced072e5f4d44bcf113d600.scope - libcontainer container df71091e2ffd0b99d763107d02d9cc37da039838cced072e5f4d44bcf113d600. 
Dec 13 01:27:51.376881 containerd[1717]: time="2024-12-13T01:27:51.376554623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-d903163327,Uid:1473af00dfea6427f20b3019abc11dbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"df71091e2ffd0b99d763107d02d9cc37da039838cced072e5f4d44bcf113d600\"" Dec 13 01:27:51.385307 containerd[1717]: time="2024-12-13T01:27:51.384952371Z" level=info msg="CreateContainer within sandbox \"df71091e2ffd0b99d763107d02d9cc37da039838cced072e5f4d44bcf113d600\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:27:51.395140 containerd[1717]: time="2024-12-13T01:27:51.395109756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-d903163327,Uid:8be47a696d6afa6dfb9f33dbf6bd8615,Namespace:kube-system,Attempt:0,} returns sandbox id \"794f8346183101612d8644a8c1395d988233557b72e819bad49443418fa720ff\"" Dec 13 01:27:51.398585 containerd[1717]: time="2024-12-13T01:27:51.398548391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-d903163327,Uid:7241a5c2b5093e5488e02e2b2bb778c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1d6ca073f7211f815eb1ad86f44f0fe533d1db994dabdd58687845540876597\"" Dec 13 01:27:51.400601 containerd[1717]: time="2024-12-13T01:27:51.400575948Z" level=info msg="CreateContainer within sandbox \"794f8346183101612d8644a8c1395d988233557b72e819bad49443418fa720ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:27:51.403215 containerd[1717]: time="2024-12-13T01:27:51.403184704Z" level=info msg="CreateContainer within sandbox \"d1d6ca073f7211f815eb1ad86f44f0fe533d1db994dabdd58687845540876597\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:27:51.460233 containerd[1717]: time="2024-12-13T01:27:51.460120221Z" level=info msg="CreateContainer within sandbox \"794f8346183101612d8644a8c1395d988233557b72e819bad49443418fa720ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc92e32178dff14ca8d53d1f7c15618e51acab4cd70f0fe9019bdd894e84b8a7\"" Dec 13 01:27:51.461406 containerd[1717]: time="2024-12-13T01:27:51.461220259Z" level=info msg="StartContainer for \"bc92e32178dff14ca8d53d1f7c15618e51acab4cd70f0fe9019bdd894e84b8a7\"" Dec 13 01:27:51.465222 containerd[1717]: time="2024-12-13T01:27:51.465190733Z" level=info msg="CreateContainer within sandbox \"df71091e2ffd0b99d763107d02d9cc37da039838cced072e5f4d44bcf113d600\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a6372113cd63ce6eb83d20fa78b8cbcb7a4d53657bf868cff80ed29181c619c\"" Dec 13 01:27:51.466837 containerd[1717]: time="2024-12-13T01:27:51.465839572Z" level=info msg="StartContainer for \"0a6372113cd63ce6eb83d20fa78b8cbcb7a4d53657bf868cff80ed29181c619c\"" Dec 13 01:27:51.486682 containerd[1717]: time="2024-12-13T01:27:51.486622862Z" level=info msg="CreateContainer within sandbox \"d1d6ca073f7211f815eb1ad86f44f0fe533d1db994dabdd58687845540876597\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"60e65b66ae3a4b7644d28dd044603e362b12710ff803c1e7308ae806d18685bb\"" Dec 13 01:27:51.488343 containerd[1717]: time="2024-12-13T01:27:51.488317579Z" level=info msg="StartContainer for \"60e65b66ae3a4b7644d28dd044603e362b12710ff803c1e7308ae806d18685bb\"" Dec 13 01:27:51.490650 systemd[1]: Started cri-containerd-bc92e32178dff14ca8d53d1f7c15618e51acab4cd70f0fe9019bdd894e84b8a7.scope - libcontainer container 
bc92e32178dff14ca8d53d1f7c15618e51acab4cd70f0fe9019bdd894e84b8a7. Dec 13 01:27:51.494860 systemd[1]: Started cri-containerd-0a6372113cd63ce6eb83d20fa78b8cbcb7a4d53657bf868cff80ed29181c619c.scope - libcontainer container 0a6372113cd63ce6eb83d20fa78b8cbcb7a4d53657bf868cff80ed29181c619c. Dec 13 01:27:51.517700 systemd[1]: Started cri-containerd-60e65b66ae3a4b7644d28dd044603e362b12710ff803c1e7308ae806d18685bb.scope - libcontainer container 60e65b66ae3a4b7644d28dd044603e362b12710ff803c1e7308ae806d18685bb. Dec 13 01:27:51.559332 containerd[1717]: time="2024-12-13T01:27:51.559237315Z" level=info msg="StartContainer for \"0a6372113cd63ce6eb83d20fa78b8cbcb7a4d53657bf868cff80ed29181c619c\" returns successfully" Dec 13 01:27:51.566670 containerd[1717]: time="2024-12-13T01:27:51.566612345Z" level=info msg="StartContainer for \"bc92e32178dff14ca8d53d1f7c15618e51acab4cd70f0fe9019bdd894e84b8a7\" returns successfully" Dec 13 01:27:51.604399 containerd[1717]: time="2024-12-13T01:27:51.604350529Z" level=info msg="StartContainer for \"60e65b66ae3a4b7644d28dd044603e362b12710ff803c1e7308ae806d18685bb\" returns successfully" Dec 13 01:27:52.998249 kubelet[2886]: I1213 01:27:52.998208 2886 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:53.838632 kubelet[2886]: E1213 01:27:53.838593 2886 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-a-d903163327\" not found" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:53.916972 kubelet[2886]: I1213 01:27:53.916899 2886 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:53.938844 kubelet[2886]: E1213 01:27:53.938813 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-d903163327\" not found" Dec 13 01:27:53.956256 kubelet[2886]: E1213 01:27:53.956104 2886 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.1-a-d903163327.1810984228c1f0ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-d903163327,UID:ci-4081.2.1-a-d903163327,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-d903163327,},FirstTimestamp:2024-12-13 01:27:46.45670321 +0000 UTC m=+0.885808599,LastTimestamp:2024-12-13 01:27:46.45670321 +0000 UTC m=+0.885808599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-d903163327,}" Dec 13 01:27:54.039924 kubelet[2886]: E1213 01:27:54.039886 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-d903163327\" not found" Dec 13 01:27:54.140663 kubelet[2886]: E1213 01:27:54.140329 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-d903163327\" not found" Dec 13 01:27:54.241197 kubelet[2886]: E1213 01:27:54.241157 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-d903163327\" not found" Dec 13 01:27:54.341724 kubelet[2886]: E1213 01:27:54.341683 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-d903163327\" not found" Dec 13 01:27:54.459382 kubelet[2886]: I1213 01:27:54.459192 2886 apiserver.go:52] "Watching 
apiserver" Dec 13 01:27:54.469992 kubelet[2886]: I1213 01:27:54.469957 2886 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:54.543109 kubelet[2886]: E1213 01:27:54.543072 2886 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-d903163327\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:54.543736 kubelet[2886]: E1213 01:27:54.543073 2886 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.1-a-d903163327\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:56.723918 systemd[1]: Reloading requested from client PID 3158 ('systemctl') (unit session-9.scope)... Dec 13 01:27:56.723932 systemd[1]: Reloading... Dec 13 01:27:56.821461 zram_generator::config[3201]: No configuration found. Dec 13 01:27:56.922557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:57.009335 systemd[1]: Reloading finished in 285 ms. Dec 13 01:27:57.042419 kubelet[2886]: I1213 01:27:57.042225 2886 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:57.042244 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:57.048821 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:57.049040 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:57.049084 systemd[1]: kubelet.service: Consumed 1.236s CPU time, 114.3M memory peak, 0B memory swap peak. Dec 13 01:27:57.052745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:57.143906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:57.153827 (kubelet)[3262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:57.209030 kubelet[3262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:57.209030 kubelet[3262]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:57.209030 kubelet[3262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:27:57.209030 kubelet[3262]: I1213 01:27:57.208921 3262 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:57.215460 kubelet[3262]: I1213 01:27:57.214649 3262 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:57.215460 kubelet[3262]: I1213 01:27:57.214672 3262 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:57.215460 kubelet[3262]: I1213 01:27:57.214826 3262 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:57.216425 kubelet[3262]: I1213 01:27:57.216402 3262 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:27:57.218691 kubelet[3262]: I1213 01:27:57.218407 3262 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:57.229499 kubelet[3262]: I1213 01:27:57.229472 3262 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:57.229756 kubelet[3262]: I1213 01:27:57.229667 3262 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:57.229952 kubelet[3262]: I1213 01:27:57.229816 3262 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:57.229952 kubelet[3262]: I1213 01:27:57.229842 3262 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:57.229952 kubelet[3262]: I1213 01:27:57.229850 3262 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:57.229952 kubelet[3262]: I1213 01:27:57.229878 3262 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:57.230113 kubelet[3262]: I1213 01:27:57.229973 3262 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:57.230113 kubelet[3262]: I1213 01:27:57.229986 3262 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:57.230113 kubelet[3262]: I1213 01:27:57.230004 3262 kubelet.go:312] "Adding apiserver pod source" Dec 13 
01:27:57.232570 kubelet[3262]: I1213 01:27:57.230017 3262 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:57.250012 kubelet[3262]: I1213 01:27:57.248749 3262 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:57.250012 kubelet[3262]: I1213 01:27:57.248936 3262 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:57.250012 kubelet[3262]: I1213 01:27:57.249349 3262 server.go:1256] "Started kubelet" Dec 13 01:27:57.257147 kubelet[3262]: I1213 01:27:57.257112 3262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:57.267491 kubelet[3262]: I1213 01:27:57.267028 3262 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:57.269754 kubelet[3262]: I1213 01:27:57.268710 3262 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:57.269754 kubelet[3262]: I1213 01:27:57.269101 3262 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:57.270196 kubelet[3262]: I1213 01:27:57.270141 3262 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:57.270448 kubelet[3262]: I1213 01:27:57.270413 3262 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:57.271073 kubelet[3262]: I1213 01:27:57.271046 3262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:57.271271 kubelet[3262]: I1213 01:27:57.271252 3262 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:57.273765 kubelet[3262]: E1213 01:27:57.273741 3262 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:57.274451 kubelet[3262]: I1213 01:27:57.274409 3262 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:57.274520 kubelet[3262]: I1213 01:27:57.274498 3262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:57.275414 kubelet[3262]: I1213 01:27:57.275386 3262 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:57.276571 kubelet[3262]: I1213 01:27:57.276549 3262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:57.278610 kubelet[3262]: I1213 01:27:57.278302 3262 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:27:57.278610 kubelet[3262]: I1213 01:27:57.278322 3262 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:57.278610 kubelet[3262]: I1213 01:27:57.278340 3262 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:57.278610 kubelet[3262]: E1213 01:27:57.278386 3262 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:57.315196 kubelet[3262]: I1213 01:27:57.315174 3262 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:57.315196 kubelet[3262]: I1213 01:27:57.315229 3262 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:57.315196 kubelet[3262]: I1213 01:27:57.315249 3262 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:57.315708 kubelet[3262]: I1213 01:27:57.315602 3262 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:27:57.315708 kubelet[3262]: I1213 01:27:57.315639 3262 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:27:57.315708 kubelet[3262]: I1213 01:27:57.315647 3262 policy_none.go:49] "None policy: Start" Dec 13 01:27:57.317162 kubelet[3262]: I1213 01:27:57.316661 3262 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:57.317162 kubelet[3262]: I1213 01:27:57.316689 3262 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:57.317162 kubelet[3262]: I1213 01:27:57.316875 3262 state_mem.go:75] "Updated machine memory state" Dec 13 01:27:57.321837 kubelet[3262]: I1213 01:27:57.321811 3262 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:57.322068 kubelet[3262]: I1213 01:27:57.322049 3262 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:57.372375 kubelet[3262]: I1213 01:27:57.372323 3262 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:57.379468 kubelet[3262]: I1213 01:27:57.379116 3262 topology_manager.go:215] "Topology Admit Handler" podUID="7241a5c2b5093e5488e02e2b2bb778c7" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.379468 kubelet[3262]: I1213 01:27:57.379196 3262 topology_manager.go:215] "Topology Admit Handler" podUID="1473af00dfea6427f20b3019abc11dbf" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.379468 kubelet[3262]: I1213 01:27:57.379262 3262 topology_manager.go:215] "Topology Admit Handler" podUID="8be47a696d6afa6dfb9f33dbf6bd8615" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.387780 kubelet[3262]: W1213 01:27:57.387763 3262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:57.392026 kubelet[3262]: W1213 01:27:57.391927 3262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:57.392406 kubelet[3262]: W1213 01:27:57.392174 3262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:57.392603 kubelet[3262]: I1213 01:27:57.392587 3262 kubelet_node_status.go:112] "Node was previously registered" 
node="ci-4081.2.1-a-d903163327" Dec 13 01:27:57.392839 kubelet[3262]: I1213 01:27:57.392780 3262 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-d903163327" Dec 13 01:27:57.471867 kubelet[3262]: I1213 01:27:57.471838 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.472094 kubelet[3262]: I1213 01:27:57.472065 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.472209 kubelet[3262]: I1213 01:27:57.472200 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.472309 kubelet[3262]: I1213 01:27:57.472301 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1473af00dfea6427f20b3019abc11dbf-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-d903163327\" (UID: \"1473af00dfea6427f20b3019abc11dbf\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.472419 kubelet[3262]: I1213 01:27:57.472410 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1473af00dfea6427f20b3019abc11dbf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-d903163327\" (UID: \"1473af00dfea6427f20b3019abc11dbf\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.472550 kubelet[3262]: I1213 01:27:57.472541 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.472652 kubelet[3262]: I1213 01:27:57.472644 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8be47a696d6afa6dfb9f33dbf6bd8615-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-d903163327\" (UID: \"8be47a696d6afa6dfb9f33dbf6bd8615\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.472747 kubelet[3262]: I1213 01:27:57.472738 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7241a5c2b5093e5488e02e2b2bb778c7-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-d903163327\" (UID: 
\"7241a5c2b5093e5488e02e2b2bb778c7\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-d903163327" Dec 13 01:27:57.472840 kubelet[3262]: I1213 01:27:57.472832 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1473af00dfea6427f20b3019abc11dbf-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-d903163327\" (UID: \"1473af00dfea6427f20b3019abc11dbf\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-d903163327" Dec 13 01:27:58.233959 kubelet[3262]: I1213 01:27:58.233766 3262 apiserver.go:52] "Watching apiserver" Dec 13 01:27:58.271251 kubelet[3262]: I1213 01:27:58.271212 3262 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:58.361226 kubelet[3262]: I1213 01:27:58.361126 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-a-d903163327" podStartSLOduration=1.361073022 podStartE2EDuration="1.361073022s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:58.334682153 +0000 UTC m=+1.176417118" watchObservedRunningTime="2024-12-13 01:27:58.361073022 +0000 UTC m=+1.202808027" Dec 13 01:27:58.376705 kubelet[3262]: I1213 01:27:58.375565 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-a-d903163327" podStartSLOduration=1.375525354 podStartE2EDuration="1.375525354s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:58.361828341 +0000 UTC m=+1.203563346" watchObservedRunningTime="2024-12-13 01:27:58.375525354 +0000 UTC m=+1.217260359" Dec 13 01:27:58.376866 kubelet[3262]: I1213 01:27:58.376821 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-d903163327" podStartSLOduration=1.376786992 podStartE2EDuration="1.376786992s" podCreationTimestamp="2024-12-13 01:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:58.375847793 +0000 UTC m=+1.217582758" watchObservedRunningTime="2024-12-13 01:27:58.376786992 +0000 UTC m=+1.218522037" Dec 13 01:28:01.864302 sudo[2337]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:01.943561 sshd[2334]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:01.947995 systemd[1]: sshd@6-10.200.20.4:22-10.200.16.10:34354.service: Deactivated successfully. Dec 13 01:28:01.951090 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:28:01.951288 systemd[1]: session-9.scope: Consumed 7.448s CPU time, 183.8M memory peak, 0B memory swap peak. Dec 13 01:28:01.952373 systemd-logind[1679]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:28:01.953761 systemd-logind[1679]: Removed session 9. Dec 13 01:28:10.474765 kubelet[3262]: I1213 01:28:10.474720 3262 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:28:10.475796 containerd[1717]: time="2024-12-13T01:28:10.475522605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:28:10.476190 kubelet[3262]: I1213 01:28:10.475866 3262 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:28:11.403498 kubelet[3262]: I1213 01:28:11.403120 3262 topology_manager.go:215] "Topology Admit Handler" podUID="71577534-3db4-4121-afed-1f2e8d30b07d" podNamespace="kube-system" podName="kube-proxy-8945w" Dec 13 01:28:11.415608 systemd[1]: Created slice kubepods-besteffort-pod71577534_3db4_4121_afed_1f2e8d30b07d.slice - libcontainer container kubepods-besteffort-pod71577534_3db4_4121_afed_1f2e8d30b07d.slice. Dec 13 01:28:11.450809 kubelet[3262]: I1213 01:28:11.450753 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71577534-3db4-4121-afed-1f2e8d30b07d-xtables-lock\") pod \"kube-proxy-8945w\" (UID: \"71577534-3db4-4121-afed-1f2e8d30b07d\") " pod="kube-system/kube-proxy-8945w" Dec 13 01:28:11.450809 kubelet[3262]: I1213 01:28:11.450808 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71577534-3db4-4121-afed-1f2e8d30b07d-kube-proxy\") pod \"kube-proxy-8945w\" (UID: \"71577534-3db4-4121-afed-1f2e8d30b07d\") " pod="kube-system/kube-proxy-8945w" Dec 13 01:28:11.451063 kubelet[3262]: I1213 01:28:11.450830 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71577534-3db4-4121-afed-1f2e8d30b07d-lib-modules\") pod \"kube-proxy-8945w\" (UID: \"71577534-3db4-4121-afed-1f2e8d30b07d\") " pod="kube-system/kube-proxy-8945w" Dec 13 01:28:11.451063 kubelet[3262]: I1213 01:28:11.450854 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq8br\" (UniqueName: \"kubernetes.io/projected/71577534-3db4-4121-afed-1f2e8d30b07d-kube-api-access-lq8br\") pod \"kube-proxy-8945w\" (UID: \"71577534-3db4-4121-afed-1f2e8d30b07d\") " pod="kube-system/kube-proxy-8945w" Dec 13 01:28:11.570988 kubelet[3262]: I1213 01:28:11.570939 3262 topology_manager.go:215] "Topology Admit Handler" podUID="d8175f91-34b5-45f3-84b3-feab1841099b" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-lz48f" Dec 13 01:28:11.579713 systemd[1]: Created slice kubepods-besteffort-podd8175f91_34b5_45f3_84b3_feab1841099b.slice - libcontainer container kubepods-besteffort-podd8175f91_34b5_45f3_84b3_feab1841099b.slice. 
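The slice names above are the kubelet's systemd cgroup driver at work: the pod's QoS class and UID are embedded in the unit name, with dashes escaped to underscores. A sketch reproducing the exact mapping visible in this log (UID 71577534-3db4-4121-afed-1f2e8d30b07d becomes kubepods-besteffort-pod71577534_3db4_4121_afed_1f2e8d30b07d.slice):

    // Reproduce the systemd slice name for a BestEffort pod UID, as seen above.
    package main

    import (
        "fmt"
        "strings"
    )

    func besteffortSlice(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(besteffortSlice("71577534-3db4-4121-afed-1f2e8d30b07d")) // kube-proxy-8945w
        fmt.Println(besteffortSlice("d8175f91-34b5-45f3-84b3-feab1841099b")) // tigera-operator-c7ccbd65-lz48f
    }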
Dec 13 01:28:11.652421 kubelet[3262]: I1213 01:28:11.652379 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8175f91-34b5-45f3-84b3-feab1841099b-var-lib-calico\") pod \"tigera-operator-c7ccbd65-lz48f\" (UID: \"d8175f91-34b5-45f3-84b3-feab1841099b\") " pod="tigera-operator/tigera-operator-c7ccbd65-lz48f" Dec 13 01:28:11.652571 kubelet[3262]: I1213 01:28:11.652446 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jwk\" (UniqueName: \"kubernetes.io/projected/d8175f91-34b5-45f3-84b3-feab1841099b-kube-api-access-29jwk\") pod \"tigera-operator-c7ccbd65-lz48f\" (UID: \"d8175f91-34b5-45f3-84b3-feab1841099b\") " pod="tigera-operator/tigera-operator-c7ccbd65-lz48f" Dec 13 01:28:11.726233 containerd[1717]: time="2024-12-13T01:28:11.725954654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8945w,Uid:71577534-3db4-4121-afed-1f2e8d30b07d,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:11.773889 containerd[1717]: time="2024-12-13T01:28:11.773620902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:11.773889 containerd[1717]: time="2024-12-13T01:28:11.773673901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:11.773889 containerd[1717]: time="2024-12-13T01:28:11.773693301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:11.773889 containerd[1717]: time="2024-12-13T01:28:11.773775021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:11.794598 systemd[1]: Started cri-containerd-5d70ba73f654cc01026a7101ff9e34ac093c00493d02c8e0b92938224d3067dc.scope - libcontainer container 5d70ba73f654cc01026a7101ff9e34ac093c00493d02c8e0b92938224d3067dc. 
Dec 13 01:28:11.814980 containerd[1717]: time="2024-12-13T01:28:11.814847204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8945w,Uid:71577534-3db4-4121-afed-1f2e8d30b07d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d70ba73f654cc01026a7101ff9e34ac093c00493d02c8e0b92938224d3067dc\"" Dec 13 01:28:11.818990 containerd[1717]: time="2024-12-13T01:28:11.818107477Z" level=info msg="CreateContainer within sandbox \"5d70ba73f654cc01026a7101ff9e34ac093c00493d02c8e0b92938224d3067dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:28:11.863276 containerd[1717]: time="2024-12-13T01:28:11.863150370Z" level=info msg="CreateContainer within sandbox \"5d70ba73f654cc01026a7101ff9e34ac093c00493d02c8e0b92938224d3067dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"629b9fcdb7a9e2235a9f09e240084bba9eb7473d59381bdda7f1359a26a0a5a7\"" Dec 13 01:28:11.864126 containerd[1717]: time="2024-12-13T01:28:11.864054688Z" level=info msg="StartContainer for \"629b9fcdb7a9e2235a9f09e240084bba9eb7473d59381bdda7f1359a26a0a5a7\"" Dec 13 01:28:11.883910 containerd[1717]: time="2024-12-13T01:28:11.883558362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-lz48f,Uid:d8175f91-34b5-45f3-84b3-feab1841099b,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:28:11.884606 systemd[1]: Started cri-containerd-629b9fcdb7a9e2235a9f09e240084bba9eb7473d59381bdda7f1359a26a0a5a7.scope - libcontainer container 629b9fcdb7a9e2235a9f09e240084bba9eb7473d59381bdda7f1359a26a0a5a7. Dec 13 01:28:11.915909 containerd[1717]: time="2024-12-13T01:28:11.915865646Z" level=info msg="StartContainer for \"629b9fcdb7a9e2235a9f09e240084bba9eb7473d59381bdda7f1359a26a0a5a7\" returns successfully" Dec 13 01:28:11.945680 containerd[1717]: time="2024-12-13T01:28:11.945393056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:11.945680 containerd[1717]: time="2024-12-13T01:28:11.945470416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:11.945680 containerd[1717]: time="2024-12-13T01:28:11.945486256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:11.946947 containerd[1717]: time="2024-12-13T01:28:11.945562176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:11.965614 systemd[1]: Started cri-containerd-11654f54b237ac9d71f719b97ee359031d354f8d91f2d917cce4df4cd5339580.scope - libcontainer container 11654f54b237ac9d71f719b97ee359031d354f8d91f2d917cce4df4cd5339580. Dec 13 01:28:11.999352 containerd[1717]: time="2024-12-13T01:28:11.998596531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-lz48f,Uid:d8175f91-34b5-45f3-84b3-feab1841099b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"11654f54b237ac9d71f719b97ee359031d354f8d91f2d917cce4df4cd5339580\"" Dec 13 01:28:12.004224 containerd[1717]: time="2024-12-13T01:28:12.004160958Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:28:14.610031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885205817.mount: Deactivated successfully. 
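The PullImage line starts the fetch of quay.io/tigera/operator:v1.36.2, and the tmpmount deactivation that follows is containerd unpacking layers through a temporary mount under /var/lib/containerd/tmpmounts. A rough equivalent of that pull through containerd's Go client, under the same socket and namespace assumptions as the earlier sketch:

    // Pull and unpack the operator image the log shows being fetched.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock") // assumed default
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.2", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s", img.Name())
    }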
Dec 13 01:28:15.083570 containerd[1717]: time="2024-12-13T01:28:15.083508555Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:15.088453 containerd[1717]: time="2024-12-13T01:28:15.088283306Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125960" Dec 13 01:28:15.092861 containerd[1717]: time="2024-12-13T01:28:15.092815097Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:15.099262 containerd[1717]: time="2024-12-13T01:28:15.099218044Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:15.100393 containerd[1717]: time="2024-12-13T01:28:15.099861843Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.095649445s" Dec 13 01:28:15.100393 containerd[1717]: time="2024-12-13T01:28:15.099896363Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:28:15.102343 containerd[1717]: time="2024-12-13T01:28:15.102243958Z" level=info msg="CreateContainer within sandbox \"11654f54b237ac9d71f719b97ee359031d354f8d91f2d917cce4df4cd5339580\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:28:15.149741 containerd[1717]: time="2024-12-13T01:28:15.149693106Z" level=info msg="CreateContainer within sandbox \"11654f54b237ac9d71f719b97ee359031d354f8d91f2d917cce4df4cd5339580\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ad75b01c38fdfe1af6550aab74e90faf37f1072c38ec19f7c0eb779c2eeaff19\"" Dec 13 01:28:15.150831 containerd[1717]: time="2024-12-13T01:28:15.150143945Z" level=info msg="StartContainer for \"ad75b01c38fdfe1af6550aab74e90faf37f1072c38ec19f7c0eb779c2eeaff19\"" Dec 13 01:28:15.176570 systemd[1]: Started cri-containerd-ad75b01c38fdfe1af6550aab74e90faf37f1072c38ec19f7c0eb779c2eeaff19.scope - libcontainer container ad75b01c38fdfe1af6550aab74e90faf37f1072c38ec19f7c0eb779c2eeaff19. 
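The quoted pull duration can be cross-checked against the log's own timestamps: the PullImage request is stamped 2024-12-13T01:28:12.004160958Z and the "Pulled image ... in 3.095649445s" event 2024-12-13T01:28:15.099861843Z, which agree to within about 50µs:

    // Difference between the two containerd timestamps quoted above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:28:12.004160958Z")
        end, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:28:15.099861843Z")
        fmt.Println(end.Sub(start)) // 3.095700885s, vs. 3.095649445s reported internally
    }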
Dec 13 01:28:15.204147 containerd[1717]: time="2024-12-13T01:28:15.204076642Z" level=info msg="StartContainer for \"ad75b01c38fdfe1af6550aab74e90faf37f1072c38ec19f7c0eb779c2eeaff19\" returns successfully" Dec 13 01:28:15.343468 kubelet[3262]: I1213 01:28:15.342867 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8945w" podStartSLOduration=4.342830856 podStartE2EDuration="4.342830856s" podCreationTimestamp="2024-12-13 01:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:12.340891283 +0000 UTC m=+15.182626288" watchObservedRunningTime="2024-12-13 01:28:15.342830856 +0000 UTC m=+18.184565861" Dec 13 01:28:15.343468 kubelet[3262]: I1213 01:28:15.342983 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-lz48f" podStartSLOduration=1.245422815 podStartE2EDuration="4.342967056s" podCreationTimestamp="2024-12-13 01:28:11 +0000 UTC" firstStartedPulling="2024-12-13 01:28:12.002671001 +0000 UTC m=+14.844406006" lastFinishedPulling="2024-12-13 01:28:15.100215282 +0000 UTC m=+17.941950247" observedRunningTime="2024-12-13 01:28:15.342686737 +0000 UTC m=+18.184421742" watchObservedRunningTime="2024-12-13 01:28:15.342967056 +0000 UTC m=+18.184702061" Dec 13 01:28:18.820772 kubelet[3262]: I1213 01:28:18.820731 3262 topology_manager.go:215] "Topology Admit Handler" podUID="43654127-47c4-49aa-b87c-5d26241140a7" podNamespace="calico-system" podName="calico-typha-69887c55f7-4pkbr" Dec 13 01:28:18.830501 systemd[1]: Created slice kubepods-besteffort-pod43654127_47c4_49aa_b87c_5d26241140a7.slice - libcontainer container kubepods-besteffort-pod43654127_47c4_49aa_b87c_5d26241140a7.slice. Dec 13 01:28:18.891850 kubelet[3262]: I1213 01:28:18.891775 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/43654127-47c4-49aa-b87c-5d26241140a7-typha-certs\") pod \"calico-typha-69887c55f7-4pkbr\" (UID: \"43654127-47c4-49aa-b87c-5d26241140a7\") " pod="calico-system/calico-typha-69887c55f7-4pkbr" Dec 13 01:28:18.891850 kubelet[3262]: I1213 01:28:18.891823 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43654127-47c4-49aa-b87c-5d26241140a7-tigera-ca-bundle\") pod \"calico-typha-69887c55f7-4pkbr\" (UID: \"43654127-47c4-49aa-b87c-5d26241140a7\") " pod="calico-system/calico-typha-69887c55f7-4pkbr" Dec 13 01:28:18.891850 kubelet[3262]: I1213 01:28:18.891885 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9fw\" (UniqueName: \"kubernetes.io/projected/43654127-47c4-49aa-b87c-5d26241140a7-kube-api-access-sp9fw\") pod \"calico-typha-69887c55f7-4pkbr\" (UID: \"43654127-47c4-49aa-b87c-5d26241140a7\") " pod="calico-system/calico-typha-69887c55f7-4pkbr" Dec 13 01:28:18.924465 kubelet[3262]: I1213 01:28:18.921857 3262 topology_manager.go:215] "Topology Admit Handler" podUID="0c3d78a0-7600-4258-af5b-49f048ae3d5a" podNamespace="calico-system" podName="calico-node-l7d6h" Dec 13 01:28:18.932982 systemd[1]: Created slice kubepods-besteffort-pod0c3d78a0_7600_4258_af5b_49f048ae3d5a.slice - libcontainer container kubepods-besteffort-pod0c3d78a0_7600_4258_af5b_49f048ae3d5a.slice. 
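The calico-typha and calico-node pods admitted above are created by tigera-operator, which has been running since 01:28:15 and reconciles its installation into the calico-system namespace. A hypothetical read-only check from the node; the kubeconfig path is an assumption (kubeadm-style layout), so substitute whatever admin credentials the cluster provides:

    // List the calico-system pods the operator has created so far.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; not confirmed by this log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("calico-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase) // e.g. calico-typha-69887c55f7-4pkbr
        }
    }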
Dec 13 01:28:18.992137 kubelet[3262]: I1213 01:28:18.992098 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-xtables-lock\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992137 kubelet[3262]: I1213 01:28:18.992143 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0c3d78a0-7600-4258-af5b-49f048ae3d5a-node-certs\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992305 kubelet[3262]: I1213 01:28:18.992163 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-cni-net-dir\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992305 kubelet[3262]: I1213 01:28:18.992185 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c3d78a0-7600-4258-af5b-49f048ae3d5a-tigera-ca-bundle\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992305 kubelet[3262]: I1213 01:28:18.992225 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-policysync\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992305 kubelet[3262]: I1213 01:28:18.992244 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-var-run-calico\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992305 kubelet[3262]: I1213 01:28:18.992264 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-var-lib-calico\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992411 kubelet[3262]: I1213 01:28:18.992288 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6t5\" (UniqueName: \"kubernetes.io/projected/0c3d78a0-7600-4258-af5b-49f048ae3d5a-kube-api-access-fm6t5\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992411 kubelet[3262]: I1213 01:28:18.992307 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-flexvol-driver-host\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992411 kubelet[3262]: I1213 01:28:18.992326 3262 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-lib-modules\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992411 kubelet[3262]: I1213 01:28:18.992343 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-cni-bin-dir\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:18.992411 kubelet[3262]: I1213 01:28:18.992361 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0c3d78a0-7600-4258-af5b-49f048ae3d5a-cni-log-dir\") pod \"calico-node-l7d6h\" (UID: \"0c3d78a0-7600-4258-af5b-49f048ae3d5a\") " pod="calico-system/calico-node-l7d6h" Dec 13 01:28:19.057738 kubelet[3262]: I1213 01:28:19.057205 3262 topology_manager.go:215] "Topology Admit Handler" podUID="fa9deb93-de89-47ca-88fa-e0139fd8400e" podNamespace="calico-system" podName="csi-node-driver-996pm" Dec 13 01:28:19.058843 kubelet[3262]: E1213 01:28:19.057575 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-996pm" podUID="fa9deb93-de89-47ca-88fa-e0139fd8400e" Dec 13 01:28:19.093593 kubelet[3262]: I1213 01:28:19.093484 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fa9deb93-de89-47ca-88fa-e0139fd8400e-registration-dir\") pod \"csi-node-driver-996pm\" (UID: \"fa9deb93-de89-47ca-88fa-e0139fd8400e\") " pod="calico-system/csi-node-driver-996pm" Dec 13 01:28:19.093593 kubelet[3262]: I1213 01:28:19.093574 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa9deb93-de89-47ca-88fa-e0139fd8400e-kubelet-dir\") pod \"csi-node-driver-996pm\" (UID: \"fa9deb93-de89-47ca-88fa-e0139fd8400e\") " pod="calico-system/csi-node-driver-996pm" Dec 13 01:28:19.093730 kubelet[3262]: I1213 01:28:19.093610 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fa9deb93-de89-47ca-88fa-e0139fd8400e-socket-dir\") pod \"csi-node-driver-996pm\" (UID: \"fa9deb93-de89-47ca-88fa-e0139fd8400e\") " pod="calico-system/csi-node-driver-996pm" Dec 13 01:28:19.093730 kubelet[3262]: I1213 01:28:19.093636 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m9c7\" (UniqueName: \"kubernetes.io/projected/fa9deb93-de89-47ca-88fa-e0139fd8400e-kube-api-access-9m9c7\") pod \"csi-node-driver-996pm\" (UID: \"fa9deb93-de89-47ca-88fa-e0139fd8400e\") " pod="calico-system/csi-node-driver-996pm" Dec 13 01:28:19.093730 kubelet[3262]: I1213 01:28:19.093655 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fa9deb93-de89-47ca-88fa-e0139fd8400e-varrun\") pod \"csi-node-driver-996pm\" 
(UID: \"fa9deb93-de89-47ca-88fa-e0139fd8400e\") " pod="calico-system/csi-node-driver-996pm" Dec 13 01:28:19.096005 kubelet[3262]: E1213 01:28:19.095714 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.096005 kubelet[3262]: W1213 01:28:19.095732 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.096005 kubelet[3262]: E1213 01:28:19.095768 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.096005 kubelet[3262]: E1213 01:28:19.095924 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.096005 kubelet[3262]: W1213 01:28:19.095932 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.096005 kubelet[3262]: E1213 01:28:19.095943 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.097999 kubelet[3262]: E1213 01:28:19.097853 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.097999 kubelet[3262]: W1213 01:28:19.097868 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.097999 kubelet[3262]: E1213 01:28:19.097898 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.099922 kubelet[3262]: E1213 01:28:19.099865 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.099922 kubelet[3262]: W1213 01:28:19.099882 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.100010 kubelet[3262]: E1213 01:28:19.099929 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.101616 kubelet[3262]: E1213 01:28:19.101483 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.101616 kubelet[3262]: W1213 01:28:19.101500 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.102366 kubelet[3262]: E1213 01:28:19.102319 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:28:19.102619 kubelet[3262]: E1213 01:28:19.102478 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.102619 kubelet[3262]: W1213 01:28:19.102490 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.102619 kubelet[3262]: E1213 01:28:19.102528 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.102895 kubelet[3262]: E1213 01:28:19.102779 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.102895 kubelet[3262]: W1213 01:28:19.102790 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.102895 kubelet[3262]: E1213 01:28:19.102837 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.104681 kubelet[3262]: E1213 01:28:19.104330 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.104681 kubelet[3262]: W1213 01:28:19.104346 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.104681 kubelet[3262]: E1213 01:28:19.104410 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.104960 kubelet[3262]: E1213 01:28:19.104888 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.104960 kubelet[3262]: W1213 01:28:19.104900 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.105038 kubelet[3262]: E1213 01:28:19.104960 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.106161 kubelet[3262]: E1213 01:28:19.105478 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.106161 kubelet[3262]: W1213 01:28:19.105494 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.106161 kubelet[3262]: E1213 01:28:19.105564 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:28:19.106563 kubelet[3262]: E1213 01:28:19.106461 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.106563 kubelet[3262]: W1213 01:28:19.106474 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.106563 kubelet[3262]: E1213 01:28:19.106517 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.106776 kubelet[3262]: E1213 01:28:19.106739 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.106776 kubelet[3262]: W1213 01:28:19.106750 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.106885 kubelet[3262]: E1213 01:28:19.106847 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.107067 kubelet[3262]: E1213 01:28:19.107047 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.107067 kubelet[3262]: W1213 01:28:19.107062 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.107288 kubelet[3262]: E1213 01:28:19.107082 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.107288 kubelet[3262]: E1213 01:28:19.107282 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.107348 kubelet[3262]: W1213 01:28:19.107291 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.107348 kubelet[3262]: E1213 01:28:19.107309 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.107474 kubelet[3262]: E1213 01:28:19.107454 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.107474 kubelet[3262]: W1213 01:28:19.107468 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.107605 kubelet[3262]: E1213 01:28:19.107479 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:28:19.108034 kubelet[3262]: E1213 01:28:19.107939 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.108034 kubelet[3262]: W1213 01:28:19.107955 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.108034 kubelet[3262]: E1213 01:28:19.107989 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.108658 kubelet[3262]: E1213 01:28:19.108557 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.108658 kubelet[3262]: W1213 01:28:19.108571 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.108658 kubelet[3262]: E1213 01:28:19.108616 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.109323 kubelet[3262]: E1213 01:28:19.109204 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.109323 kubelet[3262]: W1213 01:28:19.109216 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.109323 kubelet[3262]: E1213 01:28:19.109260 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.109762 kubelet[3262]: E1213 01:28:19.109688 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.109762 kubelet[3262]: W1213 01:28:19.109712 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.109762 kubelet[3262]: E1213 01:28:19.109755 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.110329 kubelet[3262]: E1213 01:28:19.110314 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.110463 kubelet[3262]: W1213 01:28:19.110400 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.110579 kubelet[3262]: E1213 01:28:19.110528 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:28:19.111829 kubelet[3262]: E1213 01:28:19.111306 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.111829 kubelet[3262]: W1213 01:28:19.111325 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.111829 kubelet[3262]: E1213 01:28:19.111378 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.112157 kubelet[3262]: E1213 01:28:19.112131 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.112252 kubelet[3262]: W1213 01:28:19.112231 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.112349 kubelet[3262]: E1213 01:28:19.112334 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.122917 kubelet[3262]: E1213 01:28:19.122883 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:28:19.122917 kubelet[3262]: W1213 01:28:19.122904 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:28:19.122917 kubelet[3262]: E1213 01:28:19.122923 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:28:19.136495 containerd[1717]: time="2024-12-13T01:28:19.136458956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69887c55f7-4pkbr,Uid:43654127-47c4-49aa-b87c-5d26241140a7,Namespace:calico-system,Attempt:0,}" Dec 13 01:28:19.186821 containerd[1717]: time="2024-12-13T01:28:19.186678660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:19.186821 containerd[1717]: time="2024-12-13T01:28:19.186739700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:19.186821 containerd[1717]: time="2024-12-13T01:28:19.186771060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:19.187195 containerd[1717]: time="2024-12-13T01:28:19.186986019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
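The unmarshal failures above follow directly from the missing driver binary: kubelet's FlexVolume prober execs the driver with the single argument init and parses whatever appears on stdout as JSON, so a failed exec produces empty output, and unmarshalling zero bytes yields exactly "unexpected end of JSON input". A minimal sketch of that failure mode, assuming only the Go standard library (illustrative, not kubelet's actual driver-call.go; the DriverStatus struct here is a simplified subset):

```go
// Sketch of the FlexVolume probe failure seen in the log: exec the driver
// with "init", then unmarshal stdout. A missing executable leaves stdout
// empty, and json.Unmarshal of empty input fails with the logged error.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a simplified stand-in for the JSON a driver must print.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driver string) (*DriverStatus, error) {
	out, _ := exec.Command(driver, "init").Output() // empty when the exec fails
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, err // "unexpected end of JSON input" for empty output
	}
	return &st, nil
}

func main() {
	_, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println("probe error:", err)
}
```

Run against a nonexistent path, this prints the same error string the kubelet keeps logging.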
Dec 13 01:28:19.195726 kubelet[3262]: E1213 01:28:19.195521 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:28:19.195726 kubelet[3262]: W1213 01:28:19.195546 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:28:19.195726 kubelet[3262]: E1213 01:28:19.195575 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three FlexVolume messages repeat, timestamps aside, another 25 times through Dec 13 01:28:19.220230, occasionally interleaved out of order; only the systemd message below interrupts the run.]
Dec 13 01:28:19.205879 systemd[1]: Started cri-containerd-05fa3c5719e2e1caffe8b7debabf7fc9fc49d37459eb85f94f6329eb9a956f1e.scope - libcontainer container 05fa3c5719e2e1caffe8b7debabf7fc9fc49d37459eb85f94f6329eb9a956f1e.
Dec 13 01:28:19.238702 containerd[1717]: time="2024-12-13T01:28:19.238347161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l7d6h,Uid:0c3d78a0-7600-4258-af5b-49f048ae3d5a,Namespace:calico-system,Attempt:0,}"
Dec 13 01:28:19.254064 containerd[1717]: time="2024-12-13T01:28:19.254008331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69887c55f7-4pkbr,Uid:43654127-47c4-49aa-b87c-5d26241140a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"05fa3c5719e2e1caffe8b7debabf7fc9fc49d37459eb85f94f6329eb9a956f1e\""
Dec 13 01:28:19.258113 containerd[1717]: time="2024-12-13T01:28:19.257510964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:28:19.292843 containerd[1717]: time="2024-12-13T01:28:19.292121978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:28:19.292843 containerd[1717]: time="2024-12-13T01:28:19.292209938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:28:19.292843 containerd[1717]: time="2024-12-13T01:28:19.292228138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:28:19.292843 containerd[1717]: time="2024-12-13T01:28:19.292358138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:28:19.311748 systemd[1]: Started cri-containerd-f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b.scope - libcontainer container f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b.
Dec 13 01:28:19.339528 containerd[1717]: time="2024-12-13T01:28:19.339114328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l7d6h,Uid:0c3d78a0-7600-4258-af5b-49f048ae3d5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b\""
Dec 13 01:28:20.518049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1415155708.mount: Deactivated successfully.
Dec 13 01:28:21.080466 containerd[1717]: time="2024-12-13T01:28:21.080321076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:28:21.083042 containerd[1717]: time="2024-12-13T01:28:21.082988751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Dec 13 01:28:21.085969 containerd[1717]: time="2024-12-13T01:28:21.085924785Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:28:21.090625 containerd[1717]: time="2024-12-13T01:28:21.090581136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:28:21.091273 containerd[1717]: time="2024-12-13T01:28:21.091135335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.833165691s"
Dec 13 01:28:21.091273 containerd[1717]: time="2024-12-13T01:28:21.091166495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Dec 13 01:28:21.092733 containerd[1717]: time="2024-12-13T01:28:21.092191133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:28:21.103723 containerd[1717]: time="2024-12-13T01:28:21.103680711Z" level=info msg="CreateContainer within sandbox \"05fa3c5719e2e1caffe8b7debabf7fc9fc49d37459eb85f94f6329eb9a956f1e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:28:21.144105 containerd[1717]: time="2024-12-13T01:28:21.144058634Z" level=info msg="CreateContainer within sandbox \"05fa3c5719e2e1caffe8b7debabf7fc9fc49d37459eb85f94f6329eb9a956f1e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8a5f82368e5218fb365694af1cd4b5c0b3c7aedbdd7a33a459411021907e9214\""
Dec 13 01:28:21.144855 containerd[1717]: time="2024-12-13T01:28:21.144826792Z" level=info msg="StartContainer for \"8a5f82368e5218fb365694af1cd4b5c0b3c7aedbdd7a33a459411021907e9214\""
Dec 13 01:28:21.173572 systemd[1]: Started cri-containerd-8a5f82368e5218fb365694af1cd4b5c0b3c7aedbdd7a33a459411021907e9214.scope - libcontainer container 8a5f82368e5218fb365694af1cd4b5c0b3c7aedbdd7a33a459411021907e9214.
Dec 13 01:28:21.207856 containerd[1717]: time="2024-12-13T01:28:21.207803752Z" level=info msg="StartContainer for \"8a5f82368e5218fb365694af1cd4b5c0b3c7aedbdd7a33a459411021907e9214\" returns successfully"
Dec 13 01:28:21.280461 kubelet[3262]: E1213 01:28:21.278977 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-996pm" podUID="fa9deb93-de89-47ca-88fa-e0139fd8400e"
Dec 13 01:28:21.367485 kubelet[3262]: I1213 01:28:21.366886 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-69887c55f7-4pkbr" podStartSLOduration=1.53172156 podStartE2EDuration="3.366724008s" podCreationTimestamp="2024-12-13 01:28:18 +0000 UTC" firstStartedPulling="2024-12-13 01:28:19.256538846 +0000 UTC m=+22.098273851" lastFinishedPulling="2024-12-13 01:28:21.091541294 +0000 UTC m=+23.933276299" observedRunningTime="2024-12-13 01:28:21.36561957 +0000 UTC m=+24.207354575" watchObservedRunningTime="2024-12-13 01:28:21.366724008 +0000 UTC m=+24.208459013"
Dec 13 01:28:21.401767 kubelet[3262]: E1213 01:28:21.401738 3262 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:28:21.401767 kubelet[3262]: W1213 01:28:21.401757 3262 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:28:21.402131 kubelet[3262]: E1213 01:28:21.401776 3262 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three FlexVolume messages repeat verbatim, timestamps aside, a further 32 times through Dec 13 01:28:21.423071.]
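The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of that arithmetic, as a sketch rather than kubelet code, with the timestamps copied from the log entry and the monotonic "m=+..." offsets dropped:

```go
// Verify the startup-latency arithmetic from the log entry above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 01:28:18 +0000 UTC")
	firstPull := mustParse("2024-12-13 01:28:19.256538846 +0000 UTC")
	lastPull := mustParse("2024-12-13 01:28:21.091541294 +0000 UTC")
	observed := mustParse("2024-12-13 01:28:21.366724008 +0000 UTC")

	e2e := observed.Sub(created)         // 3.366724008s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.53172156s  = podStartSLOduration
	fmt.Println(e2e, slo)
}
```

The printed values, 3.366724008s and 1.53172156s, match podStartE2EDuration and podStartSLOduration in the entry.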
Error: unexpected end of JSON input" Dec 13 01:28:22.223252 containerd[1717]: time="2024-12-13T01:28:22.223202009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:22.225825 containerd[1717]: time="2024-12-13T01:28:22.225788004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:28:22.229529 containerd[1717]: time="2024-12-13T01:28:22.229501477Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:22.233933 containerd[1717]: time="2024-12-13T01:28:22.233885228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:22.234754 containerd[1717]: time="2024-12-13T01:28:22.234682387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.142454294s" Dec 13 01:28:22.234754 containerd[1717]: time="2024-12-13T01:28:22.234719707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:28:22.237635 containerd[1717]: time="2024-12-13T01:28:22.237542581Z" level=info msg="CreateContainer within sandbox \"f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:28:22.286824 containerd[1717]: time="2024-12-13T01:28:22.286741527Z" level=info msg="CreateContainer within sandbox \"f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93\"" Dec 13 01:28:22.288416 containerd[1717]: time="2024-12-13T01:28:22.287385886Z" level=info msg="StartContainer for \"a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93\"" Dec 13 01:28:22.326586 systemd[1]: Started cri-containerd-a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93.scope - libcontainer container a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93. Dec 13 01:28:22.364909 containerd[1717]: time="2024-12-13T01:28:22.364869658Z" level=info msg="StartContainer for \"a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93\" returns successfully" Dec 13 01:28:22.368030 kubelet[3262]: I1213 01:28:22.367889 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:22.378081 systemd[1]: cri-containerd-a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93.scope: Deactivated successfully. Dec 13 01:28:22.398191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93-rootfs.mount: Deactivated successfully. 
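The pod2daemon-flexvol image pulled above runs as a short-lived init container: it drops the uds FlexVolume binary into the host plugin directory kubelet was probing, then exits, which is why systemd immediately reports the container's .scope and rootfs mount as deactivated; nothing crashed. A sketch of the effective operation, with a hypothetical source path inside the image (the destination directory is the one from the log):

```go
// installdriver.go: hedged sketch of what the flexvol-driver init container
// effectively does. The destination is the directory kubelet's FlexVolume
// prober was failing on earlier in this log; the source path inside the
// image is an assumption for illustration.
package main

import (
	"io"
	"os"
	"path/filepath"
)

func installDriver(src, destDir string) error {
	if err := os.MkdirAll(destDir, 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	// kubelet executes <dir>/uds, so the installed binary must carry
	// the driver name, not the source file name.
	out, err := os.OpenFile(filepath.Join(destDir, "uds"),
		os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, in)
	return err
}

func main() {
	_ = installDriver("/usr/local/bin/flexvol", // hypothetical path in the image
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds")
}
```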
Dec 13 01:28:23.260808 containerd[1717]: time="2024-12-13T01:28:23.260590747Z" level=info msg="shim disconnected" id=a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93 namespace=k8s.io Dec 13 01:28:23.260808 containerd[1717]: time="2024-12-13T01:28:23.260761667Z" level=warning msg="cleaning up after shim disconnected" id=a38c443dc7188f1ee8ef1bdc60e32805299e7852e483906cea6cf941f6c10e93 namespace=k8s.io Dec 13 01:28:23.260808 containerd[1717]: time="2024-12-13T01:28:23.260773227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:23.279646 kubelet[3262]: E1213 01:28:23.279369 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-996pm" podUID="fa9deb93-de89-47ca-88fa-e0139fd8400e" Dec 13 01:28:23.373649 containerd[1717]: time="2024-12-13T01:28:23.373609617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:28:25.279045 kubelet[3262]: E1213 01:28:25.279001 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-996pm" podUID="fa9deb93-de89-47ca-88fa-e0139fd8400e" Dec 13 01:28:26.118480 containerd[1717]: time="2024-12-13T01:28:26.117801509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:26.120466 containerd[1717]: time="2024-12-13T01:28:26.120249583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:28:26.124263 containerd[1717]: time="2024-12-13T01:28:26.124207974Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:26.131914 containerd[1717]: time="2024-12-13T01:28:26.131858077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:26.132709 containerd[1717]: time="2024-12-13T01:28:26.132592556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.758942459s" Dec 13 01:28:26.132709 containerd[1717]: time="2024-12-13T01:28:26.132625876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:28:26.138499 containerd[1717]: time="2024-12-13T01:28:26.136966586Z" level=info msg="CreateContainer within sandbox \"f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:28:26.183782 containerd[1717]: time="2024-12-13T01:28:26.183731083Z" level=info msg="CreateContainer within sandbox \"f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5\"" Dec 13 01:28:26.184448 containerd[1717]: time="2024-12-13T01:28:26.184407521Z" level=info msg="StartContainer for \"5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5\"" Dec 13 01:28:26.217613 systemd[1]: Started cri-containerd-5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5.scope - libcontainer container 5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5. Dec 13 01:28:26.245593 containerd[1717]: time="2024-12-13T01:28:26.245291187Z" level=info msg="StartContainer for \"5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5\" returns successfully" Dec 13 01:28:27.280457 kubelet[3262]: E1213 01:28:27.280042 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-996pm" podUID="fa9deb93-de89-47ca-88fa-e0139fd8400e" Dec 13 01:28:27.326052 containerd[1717]: time="2024-12-13T01:28:27.326005317Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:28:27.328578 systemd[1]: cri-containerd-5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5.scope: Deactivated successfully. Dec 13 01:28:27.351844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5-rootfs.mount: Deactivated successfully. 
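install-cni started cleanly, but the first filesystem event containerd sees is the WRITE of /etc/cni/net.d/calico-kubeconfig, which is a kubeconfig rather than a network config, so the triggered reload finds nothing to load and logs the error above. containerd only treats *.conf, *.conflist and *.json files in that directory as CNI configs; a small sketch of that discovery step using the libcni helper it builds on (illustrative, not containerd's actual reload path):

```go
// cniscan.go: sketch of the CNI config discovery that fails in the log.
// Assumes github.com/containernetworking/cni/libcni, the library the
// containerd CRI plugin builds on for config loading.
package main

import (
	"fmt"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// calico-kubeconfig has none of these extensions, so it is skipped
	// and the list stays empty until install-cni writes its conflist
	// (typically 10-calico.conflist).
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist", ".json"})
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	if len(files) == 0 {
		fmt.Println("no network config found in /etc/cni/net.d") // the log's wording
		return
	}
	for _, f := range files {
		fmt.Println("found CNI config:", f)
	}
}
```

A later reload succeeds once the conflist lands; the sandbox errors below come from the Calico plugin itself, not from config discovery.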
Dec 13 01:28:27.395386 kubelet[3262]: I1213 01:28:27.395269 3262 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:28:27.639701 kubelet[3262]: I1213 01:28:27.426816 3262 topology_manager.go:215] "Topology Admit Handler" podUID="d3416b50-96e6-4c99-89f2-df38b369aa49" podNamespace="kube-system" podName="coredns-76f75df574-kf9dd" Dec 13 01:28:27.639701 kubelet[3262]: I1213 01:28:27.440520 3262 topology_manager.go:215] "Topology Admit Handler" podUID="365724d2-08aa-4224-b178-802ca3c1363c" podNamespace="calico-system" podName="calico-kube-controllers-6d57c6fb5b-qv9gg" Dec 13 01:28:27.639701 kubelet[3262]: I1213 01:28:27.440690 3262 topology_manager.go:215] "Topology Admit Handler" podUID="2d905420-9c02-456f-8155-0c05b7bba211" podNamespace="kube-system" podName="coredns-76f75df574-kb9pl" Dec 13 01:28:27.639701 kubelet[3262]: I1213 01:28:27.440938 3262 topology_manager.go:215] "Topology Admit Handler" podUID="2b3430af-6f3d-4057-8b66-f5f006481739" podNamespace="calico-apiserver" podName="calico-apiserver-6b77b9bd95-dxdjg" Dec 13 01:28:27.639701 kubelet[3262]: I1213 01:28:27.441361 3262 topology_manager.go:215] "Topology Admit Handler" podUID="8b64c234-9ef4-4520-bc58-c5c9910e2b79" podNamespace="calico-apiserver" podName="calico-apiserver-6b77b9bd95-v8rbn" Dec 13 01:28:27.639701 kubelet[3262]: I1213 01:28:27.455633 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b64c234-9ef4-4520-bc58-c5c9910e2b79-calico-apiserver-certs\") pod \"calico-apiserver-6b77b9bd95-v8rbn\" (UID: \"8b64c234-9ef4-4520-bc58-c5c9910e2b79\") " pod="calico-apiserver/calico-apiserver-6b77b9bd95-v8rbn" Dec 13 01:28:27.639701 kubelet[3262]: I1213 01:28:27.455678 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b3430af-6f3d-4057-8b66-f5f006481739-calico-apiserver-certs\") pod \"calico-apiserver-6b77b9bd95-dxdjg\" (UID: \"2b3430af-6f3d-4057-8b66-f5f006481739\") " pod="calico-apiserver/calico-apiserver-6b77b9bd95-dxdjg" Dec 13 01:28:27.436639 systemd[1]: Created slice kubepods-burstable-podd3416b50_96e6_4c99_89f2_df38b369aa49.slice - libcontainer container kubepods-burstable-podd3416b50_96e6_4c99_89f2_df38b369aa49.slice. 
Dec 13 01:28:27.640245 kubelet[3262]: I1213 01:28:27.455771 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz45m\" (UniqueName: \"kubernetes.io/projected/d3416b50-96e6-4c99-89f2-df38b369aa49-kube-api-access-gz45m\") pod \"coredns-76f75df574-kf9dd\" (UID: \"d3416b50-96e6-4c99-89f2-df38b369aa49\") " pod="kube-system/coredns-76f75df574-kf9dd" Dec 13 01:28:27.640245 kubelet[3262]: I1213 01:28:27.455801 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlw8w\" (UniqueName: \"kubernetes.io/projected/8b64c234-9ef4-4520-bc58-c5c9910e2b79-kube-api-access-mlw8w\") pod \"calico-apiserver-6b77b9bd95-v8rbn\" (UID: \"8b64c234-9ef4-4520-bc58-c5c9910e2b79\") " pod="calico-apiserver/calico-apiserver-6b77b9bd95-v8rbn" Dec 13 01:28:27.640245 kubelet[3262]: I1213 01:28:27.455826 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3416b50-96e6-4c99-89f2-df38b369aa49-config-volume\") pod \"coredns-76f75df574-kf9dd\" (UID: \"d3416b50-96e6-4c99-89f2-df38b369aa49\") " pod="kube-system/coredns-76f75df574-kf9dd" Dec 13 01:28:27.640245 kubelet[3262]: I1213 01:28:27.455848 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jpfj\" (UniqueName: \"kubernetes.io/projected/2b3430af-6f3d-4057-8b66-f5f006481739-kube-api-access-9jpfj\") pod \"calico-apiserver-6b77b9bd95-dxdjg\" (UID: \"2b3430af-6f3d-4057-8b66-f5f006481739\") " pod="calico-apiserver/calico-apiserver-6b77b9bd95-dxdjg" Dec 13 01:28:27.640245 kubelet[3262]: I1213 01:28:27.455881 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzlv6\" (UniqueName: \"kubernetes.io/projected/2d905420-9c02-456f-8155-0c05b7bba211-kube-api-access-pzlv6\") pod \"coredns-76f75df574-kb9pl\" (UID: \"2d905420-9c02-456f-8155-0c05b7bba211\") " pod="kube-system/coredns-76f75df574-kb9pl" Dec 13 01:28:27.448240 systemd[1]: Created slice kubepods-burstable-pod2d905420_9c02_456f_8155_0c05b7bba211.slice - libcontainer container kubepods-burstable-pod2d905420_9c02_456f_8155_0c05b7bba211.slice. 
Dec 13 01:28:27.640397 kubelet[3262]: I1213 01:28:27.455902 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d905420-9c02-456f-8155-0c05b7bba211-config-volume\") pod \"coredns-76f75df574-kb9pl\" (UID: \"2d905420-9c02-456f-8155-0c05b7bba211\") " pod="kube-system/coredns-76f75df574-kb9pl" Dec 13 01:28:27.640397 kubelet[3262]: I1213 01:28:27.455922 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/365724d2-08aa-4224-b178-802ca3c1363c-tigera-ca-bundle\") pod \"calico-kube-controllers-6d57c6fb5b-qv9gg\" (UID: \"365724d2-08aa-4224-b178-802ca3c1363c\") " pod="calico-system/calico-kube-controllers-6d57c6fb5b-qv9gg" Dec 13 01:28:27.640397 kubelet[3262]: I1213 01:28:27.455972 3262 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4pd\" (UniqueName: \"kubernetes.io/projected/365724d2-08aa-4224-b178-802ca3c1363c-kube-api-access-gf4pd\") pod \"calico-kube-controllers-6d57c6fb5b-qv9gg\" (UID: \"365724d2-08aa-4224-b178-802ca3c1363c\") " pod="calico-system/calico-kube-controllers-6d57c6fb5b-qv9gg" Dec 13 01:28:27.458388 systemd[1]: Created slice kubepods-besteffort-pod365724d2_08aa_4224_b178_802ca3c1363c.slice - libcontainer container kubepods-besteffort-pod365724d2_08aa_4224_b178_802ca3c1363c.slice. Dec 13 01:28:27.466322 systemd[1]: Created slice kubepods-besteffort-pod2b3430af_6f3d_4057_8b66_f5f006481739.slice - libcontainer container kubepods-besteffort-pod2b3430af_6f3d_4057_8b66_f5f006481739.slice. Dec 13 01:28:27.473213 systemd[1]: Created slice kubepods-besteffort-pod8b64c234_9ef4_4520_bc58_c5c9910e2b79.slice - libcontainer container kubepods-besteffort-pod8b64c234_9ef4_4520_bc58_c5c9910e2b79.slice. 
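With the node flipped to Ready, the scheduler immediately places the pending coredns, calico-kube-controllers and calico-apiserver pods; kubelet verifies every declared volume attachment and has systemd create one slice per pod. The slice names in these lines are derived mechanically from the pod's QoS class and UID, a mapping small enough to reproduce (this mirrors the systemd cgroup driver's naming convention, not kubelet's source):

```go
// podslice.go: reproduce the kubepods-*.slice names visible in the log from
// the QoS class and pod UID (dashes in the UID become underscores, since "-"
// separates nesting levels in systemd slice names).
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// coredns-76f75df574-kf9dd, burstable:
	fmt.Println(podSlice("burstable", "d3416b50-96e6-4c99-89f2-df38b369aa49"))
	// calico-kube-controllers-6d57c6fb5b-qv9gg, besteffort:
	fmt.Println(podSlice("besteffort", "365724d2-08aa-4224-b178-802ca3c1363c"))
	// Both outputs match the "Created slice" entries above exactly.
}
```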
Dec 13 01:28:28.425173 containerd[1717]: time="2024-12-13T01:28:28.425128966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kf9dd,Uid:d3416b50-96e6-4c99-89f2-df38b369aa49,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:28.426173 containerd[1717]: time="2024-12-13T01:28:28.426113884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kb9pl,Uid:2d905420-9c02-456f-8155-0c05b7bba211,Namespace:kube-system,Attempt:0,}" Dec 13 01:28:28.426574 containerd[1717]: time="2024-12-13T01:28:28.426302323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77b9bd95-v8rbn,Uid:8b64c234-9ef4-4520-bc58-c5c9910e2b79,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:28:28.426574 containerd[1717]: time="2024-12-13T01:28:28.426343603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77b9bd95-dxdjg,Uid:2b3430af-6f3d-4057-8b66-f5f006481739,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:28:28.426574 containerd[1717]: time="2024-12-13T01:28:28.426313443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d57c6fb5b-qv9gg,Uid:365724d2-08aa-4224-b178-802ca3c1363c,Namespace:calico-system,Attempt:0,}" Dec 13 01:28:28.502535 containerd[1717]: time="2024-12-13T01:28:28.502407115Z" level=info msg="shim disconnected" id=5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5 namespace=k8s.io Dec 13 01:28:28.502535 containerd[1717]: time="2024-12-13T01:28:28.502475395Z" level=warning msg="cleaning up after shim disconnected" id=5b149dbe08012c2243eb99100046a375964aee743646c8f927748c1321893ca5 namespace=k8s.io Dec 13 01:28:28.502535 containerd[1717]: time="2024-12-13T01:28:28.502483235Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:28.703095 containerd[1717]: time="2024-12-13T01:28:28.702565712Z" level=error msg="Failed to destroy network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.703095 containerd[1717]: time="2024-12-13T01:28:28.702883672Z" level=error msg="encountered an error cleaning up failed sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.703095 containerd[1717]: time="2024-12-13T01:28:28.702932311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kf9dd,Uid:d3416b50-96e6-4c99-89f2-df38b369aa49,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.704039 kubelet[3262]: E1213 01:28:28.703408 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.704039 kubelet[3262]: E1213 01:28:28.703501 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kf9dd" Dec 13 01:28:28.704039 kubelet[3262]: E1213 01:28:28.703522 3262 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kf9dd" Dec 13 01:28:28.704366 kubelet[3262]: E1213 01:28:28.703600 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kf9dd_kube-system(d3416b50-96e6-4c99-89f2-df38b369aa49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kf9dd_kube-system(d3416b50-96e6-4c99-89f2-df38b369aa49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kf9dd" podUID="d3416b50-96e6-4c99-89f2-df38b369aa49" Dec 13 01:28:28.727242 containerd[1717]: time="2024-12-13T01:28:28.727095098Z" level=error msg="Failed to destroy network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.727912 containerd[1717]: time="2024-12-13T01:28:28.727757137Z" level=error msg="encountered an error cleaning up failed sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.727912 containerd[1717]: time="2024-12-13T01:28:28.727818776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kb9pl,Uid:2d905420-9c02-456f-8155-0c05b7bba211,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.728101 kubelet[3262]: E1213 01:28:28.728072 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.728167 kubelet[3262]: E1213 01:28:28.728158 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kb9pl" Dec 13 01:28:28.728291 kubelet[3262]: E1213 01:28:28.728180 3262 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-kb9pl" Dec 13 01:28:28.728291 kubelet[3262]: E1213 01:28:28.728237 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-kb9pl_kube-system(2d905420-9c02-456f-8155-0c05b7bba211)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-kb9pl_kube-system(2d905420-9c02-456f-8155-0c05b7bba211)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kb9pl" podUID="2d905420-9c02-456f-8155-0c05b7bba211" Dec 13 01:28:28.741448 containerd[1717]: time="2024-12-13T01:28:28.740917987Z" level=error msg="Failed to destroy network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.741726 containerd[1717]: time="2024-12-13T01:28:28.741684266Z" level=error msg="encountered an error cleaning up failed sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.741867 containerd[1717]: time="2024-12-13T01:28:28.741745946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77b9bd95-v8rbn,Uid:8b64c234-9ef4-4520-bc58-c5c9910e2b79,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.742075 kubelet[3262]: E1213 01:28:28.742051 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.742141 kubelet[3262]: E1213 01:28:28.742099 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b77b9bd95-v8rbn" Dec 13 01:28:28.742141 kubelet[3262]: E1213 01:28:28.742119 3262 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b77b9bd95-v8rbn" Dec 13 01:28:28.742203 kubelet[3262]: E1213 01:28:28.742162 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b77b9bd95-v8rbn_calico-apiserver(8b64c234-9ef4-4520-bc58-c5c9910e2b79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b77b9bd95-v8rbn_calico-apiserver(8b64c234-9ef4-4520-bc58-c5c9910e2b79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b77b9bd95-v8rbn" podUID="8b64c234-9ef4-4520-bc58-c5c9910e2b79" Dec 13 01:28:28.745752 containerd[1717]: time="2024-12-13T01:28:28.745712897Z" level=error msg="Failed to destroy network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.747033 containerd[1717]: time="2024-12-13T01:28:28.746587855Z" level=error msg="encountered an error cleaning up failed sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.747116 containerd[1717]: time="2024-12-13T01:28:28.747049054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77b9bd95-dxdjg,Uid:2b3430af-6f3d-4057-8b66-f5f006481739,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.747298 kubelet[3262]: E1213 01:28:28.747271 3262 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.747356 kubelet[3262]: E1213 01:28:28.747322 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b77b9bd95-dxdjg" Dec 13 01:28:28.747356 kubelet[3262]: E1213 01:28:28.747346 3262 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b77b9bd95-dxdjg" Dec 13 01:28:28.747505 kubelet[3262]: E1213 01:28:28.747396 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b77b9bd95-dxdjg_calico-apiserver(2b3430af-6f3d-4057-8b66-f5f006481739)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b77b9bd95-dxdjg_calico-apiserver(2b3430af-6f3d-4057-8b66-f5f006481739)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b77b9bd95-dxdjg" podUID="2b3430af-6f3d-4057-8b66-f5f006481739" Dec 13 01:28:28.751638 containerd[1717]: time="2024-12-13T01:28:28.751601284Z" level=error msg="Failed to destroy network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.751932 containerd[1717]: time="2024-12-13T01:28:28.751903083Z" level=error msg="encountered an error cleaning up failed sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.751977 containerd[1717]: time="2024-12-13T01:28:28.751953883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d57c6fb5b-qv9gg,Uid:365724d2-08aa-4224-b178-802ca3c1363c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.752129 kubelet[3262]: E1213 01:28:28.752110 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:28.752182 kubelet[3262]: E1213 01:28:28.752148 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d57c6fb5b-qv9gg" Dec 13 01:28:28.752182 kubelet[3262]: E1213 01:28:28.752165 3262 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d57c6fb5b-qv9gg" Dec 13 01:28:28.752275 kubelet[3262]: E1213 01:28:28.752216 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d57c6fb5b-qv9gg_calico-system(365724d2-08aa-4224-b178-802ca3c1363c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d57c6fb5b-qv9gg_calico-system(365724d2-08aa-4224-b178-802ca3c1363c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d57c6fb5b-qv9gg" podUID="365724d2-08aa-4224-b178-802ca3c1363c" Dec 13 01:28:29.285236 systemd[1]: Created slice kubepods-besteffort-podfa9deb93_de89_47ca_88fa_e0139fd8400e.slice - libcontainer container kubepods-besteffort-podfa9deb93_de89_47ca_88fa_e0139fd8400e.slice. Dec 13 01:28:29.287982 containerd[1717]: time="2024-12-13T01:28:29.287597698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-996pm,Uid:fa9deb93-de89-47ca-88fa-e0139fd8400e,Namespace:calico-system,Attempt:0,}" Dec 13 01:28:29.355027 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465-shm.mount: Deactivated successfully. Dec 13 01:28:29.355142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12-shm.mount: Deactivated successfully. Dec 13 01:28:29.355213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5-shm.mount: Deactivated successfully. 
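Every RunPodSandbox and sandbox-cleanup attempt above dies at the same stat: Calico's CNI plugin resolves its node name from /var/lib/calico/nodename, a file the calico-node container writes at startup through a hostPath mount of /var/lib/calico/. The calico/node image is still being pulled at this point, so ADD and DELETE fail identically for all six pods and kubelet keeps retrying with CreatePodSandboxError backoff. A hedged sketch of the guard (path and error wording are taken from the log; the function is illustrative, not libcalico-go):

```go
// nodenamecheck.go: sketch of the check behind the repeated sandbox errors.
// Until calico-node creates /var/lib/calico/nodename, every CNI call on this
// host fails with the message kubelet relays in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("CNI ADD failed:", err) // matches the failures above
		return
	}
	fmt.Println("node:", name)
}
```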
Dec 13 01:28:29.370021 containerd[1717]: time="2024-12-13T01:28:29.369964476Z" level=error msg="Failed to destroy network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.371883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8-shm.mount: Deactivated successfully. Dec 13 01:28:29.372185 containerd[1717]: time="2024-12-13T01:28:29.372129672Z" level=error msg="encountered an error cleaning up failed sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.372243 containerd[1717]: time="2024-12-13T01:28:29.372212791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-996pm,Uid:fa9deb93-de89-47ca-88fa-e0139fd8400e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.372454 kubelet[3262]: E1213 01:28:29.372412 3262 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.372538 kubelet[3262]: E1213 01:28:29.372476 3262 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-996pm" Dec 13 01:28:29.372538 kubelet[3262]: E1213 01:28:29.372496 3262 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-996pm" Dec 13 01:28:29.372628 kubelet[3262]: E1213 01:28:29.372553 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-996pm_calico-system(fa9deb93-de89-47ca-88fa-e0139fd8400e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-996pm_calico-system(fa9deb93-de89-47ca-88fa-e0139fd8400e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-996pm" podUID="fa9deb93-de89-47ca-88fa-e0139fd8400e" Dec 13 01:28:29.389056 kubelet[3262]: I1213 01:28:29.389018 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:29.390338 containerd[1717]: time="2024-12-13T01:28:29.390295791Z" level=info msg="StopPodSandbox for \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\"" Dec 13 01:28:29.390743 kubelet[3262]: I1213 01:28:29.390625 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:29.392050 containerd[1717]: time="2024-12-13T01:28:29.392013428Z" level=info msg="Ensure that sandbox 5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465 in task-service has been cleanup successfully" Dec 13 01:28:29.393860 containerd[1717]: time="2024-12-13T01:28:29.393536224Z" level=info msg="StopPodSandbox for \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\"" Dec 13 01:28:29.393860 containerd[1717]: time="2024-12-13T01:28:29.393685184Z" level=info msg="Ensure that sandbox 52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12 in task-service has been cleanup successfully" Dec 13 01:28:29.399108 containerd[1717]: time="2024-12-13T01:28:29.399080812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:28:29.401330 kubelet[3262]: I1213 01:28:29.401299 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:29.402366 containerd[1717]: time="2024-12-13T01:28:29.402328605Z" level=info msg="StopPodSandbox for \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\"" Dec 13 01:28:29.404692 containerd[1717]: time="2024-12-13T01:28:29.404666840Z" level=info msg="Ensure that sandbox f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8 in task-service has been cleanup successfully" Dec 13 01:28:29.411988 kubelet[3262]: I1213 01:28:29.411889 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:29.412967 containerd[1717]: time="2024-12-13T01:28:29.412899141Z" level=info msg="StopPodSandbox for \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\"" Dec 13 01:28:29.413190 containerd[1717]: time="2024-12-13T01:28:29.413080621Z" level=info msg="Ensure that sandbox c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724 in task-service has been cleanup successfully" Dec 13 01:28:29.420515 kubelet[3262]: I1213 01:28:29.420483 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:29.426319 containerd[1717]: time="2024-12-13T01:28:29.426273512Z" level=info msg="StopPodSandbox for \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\"" Dec 13 01:28:29.427401 containerd[1717]: time="2024-12-13T01:28:29.427339949Z" level=info msg="Ensure that sandbox b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5 in task-service has been cleanup successfully" Dec 13 01:28:29.428123 
kubelet[3262]: I1213 01:28:29.428026 3262 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:29.429718 containerd[1717]: time="2024-12-13T01:28:29.429346465Z" level=info msg="StopPodSandbox for \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\"" Dec 13 01:28:29.429718 containerd[1717]: time="2024-12-13T01:28:29.429518265Z" level=info msg="Ensure that sandbox ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a in task-service has been cleanup successfully" Dec 13 01:28:29.484644 containerd[1717]: time="2024-12-13T01:28:29.484596903Z" level=error msg="StopPodSandbox for \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\" failed" error="failed to destroy network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.485115 kubelet[3262]: E1213 01:28:29.485087 3262 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:29.485192 kubelet[3262]: E1213 01:28:29.485163 3262 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8"} Dec 13 01:28:29.485230 kubelet[3262]: E1213 01:28:29.485200 3262 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa9deb93-de89-47ca-88fa-e0139fd8400e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:28:29.485362 kubelet[3262]: E1213 01:28:29.485241 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa9deb93-de89-47ca-88fa-e0139fd8400e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-996pm" podUID="fa9deb93-de89-47ca-88fa-e0139fd8400e" Dec 13 01:28:29.489108 containerd[1717]: time="2024-12-13T01:28:29.489068453Z" level=error msg="StopPodSandbox for \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\" failed" error="failed to destroy network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
01:28:29.489458 kubelet[3262]: E1213 01:28:29.489415 3262 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:29.489517 kubelet[3262]: E1213 01:28:29.489469 3262 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12"} Dec 13 01:28:29.489517 kubelet[3262]: E1213 01:28:29.489501 3262 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d905420-9c02-456f-8155-0c05b7bba211\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:28:29.489600 kubelet[3262]: E1213 01:28:29.489527 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d905420-9c02-456f-8155-0c05b7bba211\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kb9pl" podUID="2d905420-9c02-456f-8155-0c05b7bba211" Dec 13 01:28:29.493531 containerd[1717]: time="2024-12-13T01:28:29.493221524Z" level=error msg="StopPodSandbox for \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\" failed" error="failed to destroy network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.493615 kubelet[3262]: E1213 01:28:29.493398 3262 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:29.493615 kubelet[3262]: E1213 01:28:29.493460 3262 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724"} Dec 13 01:28:29.493615 kubelet[3262]: E1213 01:28:29.493540 3262 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"365724d2-08aa-4224-b178-802ca3c1363c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:28:29.493615 kubelet[3262]: E1213 01:28:29.493569 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"365724d2-08aa-4224-b178-802ca3c1363c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d57c6fb5b-qv9gg" podUID="365724d2-08aa-4224-b178-802ca3c1363c" Dec 13 01:28:29.495160 containerd[1717]: time="2024-12-13T01:28:29.495118320Z" level=error msg="StopPodSandbox for \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\" failed" error="failed to destroy network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.495318 kubelet[3262]: E1213 01:28:29.495265 3262 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:29.495318 kubelet[3262]: E1213 01:28:29.495298 3262 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465"} Dec 13 01:28:29.495465 kubelet[3262]: E1213 01:28:29.495330 3262 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b64c234-9ef4-4520-bc58-c5c9910e2b79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:28:29.495465 kubelet[3262]: E1213 01:28:29.495356 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b64c234-9ef4-4520-bc58-c5c9910e2b79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b77b9bd95-v8rbn" podUID="8b64c234-9ef4-4520-bc58-c5c9910e2b79" Dec 13 01:28:29.503232 containerd[1717]: time="2024-12-13T01:28:29.503183062Z" level=error msg="StopPodSandbox for 
\"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\" failed" error="failed to destroy network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.503371 kubelet[3262]: E1213 01:28:29.503344 3262 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:29.503442 kubelet[3262]: E1213 01:28:29.503379 3262 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a"} Dec 13 01:28:29.503490 kubelet[3262]: E1213 01:28:29.503474 3262 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b3430af-6f3d-4057-8b66-f5f006481739\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:28:29.503549 kubelet[3262]: E1213 01:28:29.503505 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b3430af-6f3d-4057-8b66-f5f006481739\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b77b9bd95-dxdjg" podUID="2b3430af-6f3d-4057-8b66-f5f006481739" Dec 13 01:28:29.506841 containerd[1717]: time="2024-12-13T01:28:29.506798774Z" level=error msg="StopPodSandbox for \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\" failed" error="failed to destroy network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:28:29.507002 kubelet[3262]: E1213 01:28:29.506976 3262 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:29.507051 kubelet[3262]: E1213 01:28:29.507013 3262 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5"} Dec 13 01:28:29.507051 kubelet[3262]: E1213 01:28:29.507043 3262 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3416b50-96e6-4c99-89f2-df38b369aa49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:28:29.507125 kubelet[3262]: E1213 01:28:29.507069 3262 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3416b50-96e6-4c99-89f2-df38b369aa49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-kf9dd" podUID="d3416b50-96e6-4c99-89f2-df38b369aa49" Dec 13 01:28:33.569283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317506363.mount: Deactivated successfully. Dec 13 01:28:33.621103 containerd[1717]: time="2024-12-13T01:28:33.621044817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:33.623713 containerd[1717]: time="2024-12-13T01:28:33.623562652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:28:33.626290 containerd[1717]: time="2024-12-13T01:28:33.626234086Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:33.631027 containerd[1717]: time="2024-12-13T01:28:33.630966677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:33.631662 containerd[1717]: time="2024-12-13T01:28:33.631490316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.231354146s" Dec 13 01:28:33.631662 containerd[1717]: time="2024-12-13T01:28:33.631531516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:28:33.643866 containerd[1717]: time="2024-12-13T01:28:33.643735050Z" level=info msg="CreateContainer within sandbox \"f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:28:33.680626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629809103.mount: Deactivated successfully. 
Dec 13 01:28:33.699766 containerd[1717]: time="2024-12-13T01:28:33.699615376Z" level=info msg="CreateContainer within sandbox \"f1c244d15af981b17adc5043b37a75e0d1d780610eccb100a5b2151591c06e4b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c6843910f3379f4373c0df03ffda4d86997ded7082bc65450b6c16c80aa5f4b5\"" Dec 13 01:28:33.700459 containerd[1717]: time="2024-12-13T01:28:33.700261894Z" level=info msg="StartContainer for \"c6843910f3379f4373c0df03ffda4d86997ded7082bc65450b6c16c80aa5f4b5\"" Dec 13 01:28:33.728688 systemd[1]: Started cri-containerd-c6843910f3379f4373c0df03ffda4d86997ded7082bc65450b6c16c80aa5f4b5.scope - libcontainer container c6843910f3379f4373c0df03ffda4d86997ded7082bc65450b6c16c80aa5f4b5. Dec 13 01:28:33.760279 containerd[1717]: time="2024-12-13T01:28:33.760222691Z" level=info msg="StartContainer for \"c6843910f3379f4373c0df03ffda4d86997ded7082bc65450b6c16c80aa5f4b5\" returns successfully" Dec 13 01:28:33.970806 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:28:33.971000 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 01:28:34.462514 kubelet[3262]: I1213 01:28:34.462474 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-l7d6h" podStartSLOduration=2.172951894 podStartE2EDuration="16.462412367s" podCreationTimestamp="2024-12-13 01:28:18 +0000 UTC" firstStartedPulling="2024-12-13 01:28:19.342304042 +0000 UTC m=+22.184039047" lastFinishedPulling="2024-12-13 01:28:33.631764515 +0000 UTC m=+36.473499520" observedRunningTime="2024-12-13 01:28:34.46083905 +0000 UTC m=+37.302574055" watchObservedRunningTime="2024-12-13 01:28:34.462412367 +0000 UTC m=+37.304147372" Dec 13 01:28:36.660906 kubelet[3262]: I1213 01:28:36.660861 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:40.280388 containerd[1717]: time="2024-12-13T01:28:40.279537436Z" level=info msg="StopPodSandbox for \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\"" Dec 13 01:28:40.280896 containerd[1717]: time="2024-12-13T01:28:40.280638714Z" level=info msg="StopPodSandbox for \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\"" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.345 [INFO][4626] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.346 [INFO][4626] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" iface="eth0" netns="/var/run/netns/cni-8844f96d-5209-104e-a2e2-c3c159e83908" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.346 [INFO][4626] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" iface="eth0" netns="/var/run/netns/cni-8844f96d-5209-104e-a2e2-c3c159e83908" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.346 [INFO][4626] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" iface="eth0" netns="/var/run/netns/cni-8844f96d-5209-104e-a2e2-c3c159e83908" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.346 [INFO][4626] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.346 [INFO][4626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.373 [INFO][4638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.373 [INFO][4638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.374 [INFO][4638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.382 [WARNING][4638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.382 [INFO][4638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.383 [INFO][4638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:40.387775 containerd[1717]: 2024-12-13 01:28:40.386 [INFO][4626] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:40.389925 systemd[1]: run-netns-cni\x2d8844f96d\x2d5209\x2d104e\x2da2e2\x2dc3c159e83908.mount: Deactivated successfully. Dec 13 01:28:40.390713 containerd[1717]: time="2024-12-13T01:28:40.390535980Z" level=info msg="TearDown network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\" successfully" Dec 13 01:28:40.390713 containerd[1717]: time="2024-12-13T01:28:40.390568619Z" level=info msg="StopPodSandbox for \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\" returns successfully" Dec 13 01:28:40.391867 containerd[1717]: time="2024-12-13T01:28:40.391505058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-996pm,Uid:fa9deb93-de89-47ca-88fa-e0139fd8400e,Namespace:calico-system,Attempt:1,}" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.353 [INFO][4627] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.353 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" iface="eth0" netns="/var/run/netns/cni-f679dbb1-78db-02e8-0939-e2af7b65185b" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.354 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" iface="eth0" netns="/var/run/netns/cni-f679dbb1-78db-02e8-0939-e2af7b65185b" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.354 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" iface="eth0" netns="/var/run/netns/cni-f679dbb1-78db-02e8-0939-e2af7b65185b" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.354 [INFO][4627] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.354 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.377 [INFO][4642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.378 [INFO][4642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.384 [INFO][4642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.397 [WARNING][4642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.397 [INFO][4642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.398 [INFO][4642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:40.401458 containerd[1717]: 2024-12-13 01:28:40.400 [INFO][4627] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:40.401876 containerd[1717]: time="2024-12-13T01:28:40.401565198Z" level=info msg="TearDown network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\" successfully" Dec 13 01:28:40.401876 containerd[1717]: time="2024-12-13T01:28:40.401584918Z" level=info msg="StopPodSandbox for \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\" returns successfully" Dec 13 01:28:40.403423 containerd[1717]: time="2024-12-13T01:28:40.403392554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kf9dd,Uid:d3416b50-96e6-4c99-89f2-df38b369aa49,Namespace:kube-system,Attempt:1,}" Dec 13 01:28:40.404636 systemd[1]: run-netns-cni\x2df679dbb1\x2d78db\x2d02e8\x2d0939\x2de2af7b65185b.mount: Deactivated successfully. Dec 13 01:28:40.627460 systemd-networkd[1333]: cali270d47c135e: Link UP Dec 13 01:28:40.629737 systemd-networkd[1333]: cali270d47c135e: Gained carrier Dec 13 01:28:40.630777 systemd-networkd[1333]: cali6f88713d54d: Link UP Dec 13 01:28:40.631826 systemd-networkd[1333]: cali6f88713d54d: Gained carrier Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.478 [INFO][4651] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.493 [INFO][4651] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0 csi-node-driver- calico-system fa9deb93-de89-47ca-88fa-e0139fd8400e 772 0 2024-12-13 01:28:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-a-d903163327 csi-node-driver-996pm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali270d47c135e [] []}} ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Namespace="calico-system" Pod="csi-node-driver-996pm" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.493 [INFO][4651] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Namespace="calico-system" Pod="csi-node-driver-996pm" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.531 [INFO][4674] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" HandleID="k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.545 [INFO][4674] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" HandleID="k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003167e0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-d903163327", "pod":"csi-node-driver-996pm", "timestamp":"2024-12-13 01:28:40.531920703 +0000 UTC"}, Hostname:"ci-4081.2.1-a-d903163327", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.545 [INFO][4674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.546 [INFO][4674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.546 [INFO][4674] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-d903163327' Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.548 [INFO][4674] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.555 [INFO][4674] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.560 [INFO][4674] ipam/ipam.go 489: Trying affinity for 192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.567 [INFO][4674] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.569 [INFO][4674] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.569 [INFO][4674] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.192/26 handle="k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.572 [INFO][4674] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543 Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.579 [INFO][4674] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.192/26 handle="k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.586 [INFO][4674] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.193/26] block=192.168.108.192/26 handle="k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.586 [INFO][4674] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.193/26] handle="k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.586 [INFO][4674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:28:40.653941 containerd[1717]: 2024-12-13 01:28:40.586 [INFO][4674] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.193/26] IPv6=[] ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" HandleID="k8s-pod-network.4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.655835 containerd[1717]: 2024-12-13 01:28:40.589 [INFO][4651] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Namespace="calico-system" Pod="csi-node-driver-996pm" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa9deb93-de89-47ca-88fa-e0139fd8400e", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"", Pod:"csi-node-driver-996pm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali270d47c135e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:40.655835 containerd[1717]: 2024-12-13 01:28:40.589 [INFO][4651] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.193/32] ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Namespace="calico-system" Pod="csi-node-driver-996pm" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.655835 containerd[1717]: 2024-12-13 01:28:40.589 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali270d47c135e ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Namespace="calico-system" Pod="csi-node-driver-996pm" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.655835 containerd[1717]: 2024-12-13 01:28:40.629 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Namespace="calico-system" Pod="csi-node-driver-996pm" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.655835 containerd[1717]: 2024-12-13 01:28:40.632 [INFO][4651] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Namespace="calico-system" Pod="csi-node-driver-996pm" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa9deb93-de89-47ca-88fa-e0139fd8400e", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543", Pod:"csi-node-driver-996pm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali270d47c135e", MAC:"ea:1b:e5:59:0d:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:40.655835 containerd[1717]: 2024-12-13 01:28:40.651 [INFO][4651] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543" Namespace="calico-system" Pod="csi-node-driver-996pm" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.500 [INFO][4666] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.515 [INFO][4666] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0 coredns-76f75df574- kube-system d3416b50-96e6-4c99-89f2-df38b369aa49 773 0 2024-12-13 01:28:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-d903163327 coredns-76f75df574-kf9dd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f88713d54d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Namespace="kube-system" Pod="coredns-76f75df574-kf9dd" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.515 [INFO][4666] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Namespace="kube-system" Pod="coredns-76f75df574-kf9dd" 
WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.558 [INFO][4679] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" HandleID="k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.573 [INFO][4679] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" HandleID="k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318d50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-d903163327", "pod":"coredns-76f75df574-kf9dd", "timestamp":"2024-12-13 01:28:40.558549691 +0000 UTC"}, Hostname:"ci-4081.2.1-a-d903163327", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.573 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.587 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.587 [INFO][4679] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-d903163327' Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.588 [INFO][4679] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.593 [INFO][4679] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.596 [INFO][4679] ipam/ipam.go 489: Trying affinity for 192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.598 [INFO][4679] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.600 [INFO][4679] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.600 [INFO][4679] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.192/26 handle="k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.601 [INFO][4679] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879 Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.607 [INFO][4679] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.192/26 handle="k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 
2024-12-13 01:28:40.613 [INFO][4679] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.194/26] block=192.168.108.192/26 handle="k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.613 [INFO][4679] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.194/26] handle="k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.613 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:40.657554 containerd[1717]: 2024-12-13 01:28:40.613 [INFO][4679] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.194/26] IPv6=[] ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" HandleID="k8s-pod-network.5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.658009 containerd[1717]: 2024-12-13 01:28:40.615 [INFO][4666] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Namespace="kube-system" Pod="coredns-76f75df574-kf9dd" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d3416b50-96e6-4c99-89f2-df38b369aa49", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"", Pod:"coredns-76f75df574-kf9dd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f88713d54d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:40.658009 containerd[1717]: 2024-12-13 01:28:40.615 [INFO][4666] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.194/32] ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Namespace="kube-system" Pod="coredns-76f75df574-kf9dd" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 
01:28:40.658009 containerd[1717]: 2024-12-13 01:28:40.615 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f88713d54d ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Namespace="kube-system" Pod="coredns-76f75df574-kf9dd" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.658009 containerd[1717]: 2024-12-13 01:28:40.631 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Namespace="kube-system" Pod="coredns-76f75df574-kf9dd" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.658009 containerd[1717]: 2024-12-13 01:28:40.632 [INFO][4666] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Namespace="kube-system" Pod="coredns-76f75df574-kf9dd" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d3416b50-96e6-4c99-89f2-df38b369aa49", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879", Pod:"coredns-76f75df574-kf9dd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f88713d54d", MAC:"92:37:e7:db:aa:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:40.658009 containerd[1717]: 2024-12-13 01:28:40.655 [INFO][4666] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879" Namespace="kube-system" Pod="coredns-76f75df574-kf9dd" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:40.685812 containerd[1717]: time="2024-12-13T01:28:40.685704883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:40.685812 containerd[1717]: time="2024-12-13T01:28:40.685763243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:40.685812 containerd[1717]: time="2024-12-13T01:28:40.685779003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:40.686174 containerd[1717]: time="2024-12-13T01:28:40.685857442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:40.690086 containerd[1717]: time="2024-12-13T01:28:40.689975194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:40.691912 containerd[1717]: time="2024-12-13T01:28:40.691697591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:40.691912 containerd[1717]: time="2024-12-13T01:28:40.691718231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:40.691912 containerd[1717]: time="2024-12-13T01:28:40.691801951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:40.703609 systemd[1]: Started cri-containerd-5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879.scope - libcontainer container 5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879. Dec 13 01:28:40.710279 systemd[1]: Started cri-containerd-4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543.scope - libcontainer container 4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543. 
Dec 13 01:28:40.741987 containerd[1717]: time="2024-12-13T01:28:40.741939493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-996pm,Uid:fa9deb93-de89-47ca-88fa-e0139fd8400e,Namespace:calico-system,Attempt:1,} returns sandbox id \"4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543\"" Dec 13 01:28:40.744070 containerd[1717]: time="2024-12-13T01:28:40.743871129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:28:40.746839 containerd[1717]: time="2024-12-13T01:28:40.746776963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kf9dd,Uid:d3416b50-96e6-4c99-89f2-df38b369aa49,Namespace:kube-system,Attempt:1,} returns sandbox id \"5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879\"" Dec 13 01:28:40.749640 containerd[1717]: time="2024-12-13T01:28:40.749602158Z" level=info msg="CreateContainer within sandbox \"5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:40.824183 containerd[1717]: time="2024-12-13T01:28:40.824140692Z" level=info msg="CreateContainer within sandbox \"5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c8c59f224a3f16bbf0b8ac910017b1f97ef239042d08e113893daf12d2e7c75\"" Dec 13 01:28:40.825626 containerd[1717]: time="2024-12-13T01:28:40.825601329Z" level=info msg="StartContainer for \"1c8c59f224a3f16bbf0b8ac910017b1f97ef239042d08e113893daf12d2e7c75\"" Dec 13 01:28:40.848637 systemd[1]: Started cri-containerd-1c8c59f224a3f16bbf0b8ac910017b1f97ef239042d08e113893daf12d2e7c75.scope - libcontainer container 1c8c59f224a3f16bbf0b8ac910017b1f97ef239042d08e113893daf12d2e7c75. Dec 13 01:28:40.876162 containerd[1717]: time="2024-12-13T01:28:40.876103671Z" level=info msg="StartContainer for \"1c8c59f224a3f16bbf0b8ac910017b1f97ef239042d08e113893daf12d2e7c75\" returns successfully" Dec 13 01:28:41.512674 kubelet[3262]: I1213 01:28:41.512628 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kf9dd" podStartSLOduration=30.512578187 podStartE2EDuration="30.512578187s" podCreationTimestamp="2024-12-13 01:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:41.496965617 +0000 UTC m=+44.338700622" watchObservedRunningTime="2024-12-13 01:28:41.512578187 +0000 UTC m=+44.354313192" Dec 13 01:28:42.005570 systemd-networkd[1333]: cali6f88713d54d: Gained IPv6LL Dec 13 01:28:42.311803 update_engine[1683]: I20241213 01:28:42.311569 1683 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 01:28:42.311803 update_engine[1683]: I20241213 01:28:42.311614 1683 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 01:28:42.311803 update_engine[1683]: I20241213 01:28:42.311800 1683 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 01:28:42.313540 update_engine[1683]: I20241213 01:28:42.313508 1683 omaha_request_params.cc:62] Current group set to stable Dec 13 01:28:42.313702 update_engine[1683]: I20241213 01:28:42.313602 1683 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 01:28:42.313702 update_engine[1683]: I20241213 01:28:42.313611 1683 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 01:28:42.313702 update_engine[1683]: I20241213 01:28:42.313627 1683 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:28:42.313702 update_engine[1683]: I20241213 01:28:42.313657 1683 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 01:28:42.313797 update_engine[1683]: I20241213 01:28:42.313725 1683 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:28:42.313797 update_engine[1683]: I20241213 01:28:42.313735 1683 omaha_request_action.cc:272] Request: Dec 13 01:28:42.313797 update_engine[1683]: Dec 13 01:28:42.313797 update_engine[1683]: Dec 13 01:28:42.313797 update_engine[1683]: Dec 13 01:28:42.313797 update_engine[1683]: Dec 13 01:28:42.313797 update_engine[1683]: Dec 13 01:28:42.313797 update_engine[1683]: Dec 13 01:28:42.313797 update_engine[1683]: Dec 13 01:28:42.313797 update_engine[1683]: Dec 13 01:28:42.313797 update_engine[1683]: I20241213 01:28:42.313740 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:28:42.314174 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 01:28:42.316039 update_engine[1683]: I20241213 01:28:42.316004 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:28:42.316292 update_engine[1683]: I20241213 01:28:42.316262 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:28:42.325193 systemd-networkd[1333]: cali270d47c135e: Gained IPv6LL Dec 13 01:28:42.356883 update_engine[1683]: E20241213 01:28:42.356833 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:28:42.357177 update_engine[1683]: I20241213 01:28:42.357136 1683 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 01:28:42.377009 containerd[1717]: time="2024-12-13T01:28:42.376945937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:42.380890 containerd[1717]: time="2024-12-13T01:28:42.380704130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:28:42.386048 containerd[1717]: time="2024-12-13T01:28:42.385988320Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:42.392004 containerd[1717]: time="2024-12-13T01:28:42.391945708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:42.393044 containerd[1717]: time="2024-12-13T01:28:42.392616707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.648711418s" Dec 13 01:28:42.393044 containerd[1717]: time="2024-12-13T01:28:42.392654587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:28:42.395583 containerd[1717]: time="2024-12-13T01:28:42.395535821Z" 
level=info msg="CreateContainer within sandbox \"4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:28:42.426798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225897539.mount: Deactivated successfully. Dec 13 01:28:42.439010 containerd[1717]: time="2024-12-13T01:28:42.438953176Z" level=info msg="CreateContainer within sandbox \"4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ca57edc5745770706abd5aa07720d54eba9a4c94e57f41c2415f6784c67f9620\"" Dec 13 01:28:42.439655 containerd[1717]: time="2024-12-13T01:28:42.439597775Z" level=info msg="StartContainer for \"ca57edc5745770706abd5aa07720d54eba9a4c94e57f41c2415f6784c67f9620\"" Dec 13 01:28:42.470660 systemd[1]: Started cri-containerd-ca57edc5745770706abd5aa07720d54eba9a4c94e57f41c2415f6784c67f9620.scope - libcontainer container ca57edc5745770706abd5aa07720d54eba9a4c94e57f41c2415f6784c67f9620. Dec 13 01:28:42.502838 containerd[1717]: time="2024-12-13T01:28:42.502717412Z" level=info msg="StartContainer for \"ca57edc5745770706abd5aa07720d54eba9a4c94e57f41c2415f6784c67f9620\" returns successfully" Dec 13 01:28:42.504093 containerd[1717]: time="2024-12-13T01:28:42.503974009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:28:43.282185 containerd[1717]: time="2024-12-13T01:28:43.281588929Z" level=info msg="StopPodSandbox for \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\"" Dec 13 01:28:43.283852 containerd[1717]: time="2024-12-13T01:28:43.283625445Z" level=info msg="StopPodSandbox for \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\"" Dec 13 01:28:43.286018 containerd[1717]: time="2024-12-13T01:28:43.285307482Z" level=info msg="StopPodSandbox for \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\"" Dec 13 01:28:43.289107 containerd[1717]: time="2024-12-13T01:28:43.288763995Z" level=info msg="StopPodSandbox for \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\"" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.436 [INFO][4976] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.437 [INFO][4976] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" iface="eth0" netns="/var/run/netns/cni-d3c822d3-e7c3-7356-5224-333dae42dfe6" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.437 [INFO][4976] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" iface="eth0" netns="/var/run/netns/cni-d3c822d3-e7c3-7356-5224-333dae42dfe6" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.437 [INFO][4976] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" iface="eth0" netns="/var/run/netns/cni-d3c822d3-e7c3-7356-5224-333dae42dfe6" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.437 [INFO][4976] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.437 [INFO][4976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.469 [INFO][5022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.469 [INFO][5022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.469 [INFO][5022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.487 [WARNING][5022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.487 [INFO][5022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.489 [INFO][5022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:43.495135 containerd[1717]: 2024-12-13 01:28:43.493 [INFO][4976] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:43.497017 containerd[1717]: time="2024-12-13T01:28:43.495298232Z" level=info msg="TearDown network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\" successfully" Dec 13 01:28:43.497017 containerd[1717]: time="2024-12-13T01:28:43.495327512Z" level=info msg="StopPodSandbox for \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\" returns successfully" Dec 13 01:28:43.501456 containerd[1717]: time="2024-12-13T01:28:43.497734587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d57c6fb5b-qv9gg,Uid:365724d2-08aa-4224-b178-802ca3c1363c,Namespace:calico-system,Attempt:1,}" Dec 13 01:28:43.499540 systemd[1]: run-netns-cni\x2dd3c822d3\x2de7c3\x2d7356\x2d5224\x2d333dae42dfe6.mount: Deactivated successfully. 
Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.428 [INFO][4989] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.429 [INFO][4989] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" iface="eth0" netns="/var/run/netns/cni-44b03d5b-5b78-76e0-16b0-431f833592a8" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.429 [INFO][4989] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" iface="eth0" netns="/var/run/netns/cni-44b03d5b-5b78-76e0-16b0-431f833592a8" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.429 [INFO][4989] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" iface="eth0" netns="/var/run/netns/cni-44b03d5b-5b78-76e0-16b0-431f833592a8" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.429 [INFO][4989] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.429 [INFO][4989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.529 [INFO][5018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.530 [INFO][5018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.530 [INFO][5018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.551 [WARNING][5018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.551 [INFO][5018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.555 [INFO][5018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:43.562733 containerd[1717]: 2024-12-13 01:28:43.559 [INFO][4989] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:43.566521 containerd[1717]: time="2024-12-13T01:28:43.566483573Z" level=info msg="TearDown network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\" successfully" Dec 13 01:28:43.568960 systemd[1]: run-netns-cni\x2d44b03d5b\x2d5b78\x2d76e0\x2d16b0\x2d431f833592a8.mount: Deactivated successfully. Dec 13 01:28:43.571454 containerd[1717]: time="2024-12-13T01:28:43.571115283Z" level=info msg="StopPodSandbox for \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\" returns successfully" Dec 13 01:28:43.582330 containerd[1717]: time="2024-12-13T01:28:43.581778983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77b9bd95-v8rbn,Uid:8b64c234-9ef4-4520-bc58-c5c9910e2b79,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.474 [INFO][4993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.475 [INFO][4993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" iface="eth0" netns="/var/run/netns/cni-742190be-e273-2500-e747-85d9d74a62d1" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.475 [INFO][4993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" iface="eth0" netns="/var/run/netns/cni-742190be-e273-2500-e747-85d9d74a62d1" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.477 [INFO][4993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" iface="eth0" netns="/var/run/netns/cni-742190be-e273-2500-e747-85d9d74a62d1" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.477 [INFO][4993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.477 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.618 [INFO][5029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.618 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.619 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.636 [WARNING][5029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.637 [INFO][5029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.652 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:43.659596 containerd[1717]: 2024-12-13 01:28:43.655 [INFO][4993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:43.660852 containerd[1717]: time="2024-12-13T01:28:43.659764510Z" level=info msg="TearDown network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\" successfully" Dec 13 01:28:43.660852 containerd[1717]: time="2024-12-13T01:28:43.659789510Z" level=info msg="StopPodSandbox for \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\" returns successfully" Dec 13 01:28:43.660852 containerd[1717]: time="2024-12-13T01:28:43.660423389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kb9pl,Uid:2d905420-9c02-456f-8155-0c05b7bba211,Namespace:kube-system,Attempt:1,}" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.480 [INFO][4994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.482 [INFO][4994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" iface="eth0" netns="/var/run/netns/cni-7e614b9d-b85d-a856-f571-bd10df55bca8" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.484 [INFO][4994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" iface="eth0" netns="/var/run/netns/cni-7e614b9d-b85d-a856-f571-bd10df55bca8" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.487 [INFO][4994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" iface="eth0" netns="/var/run/netns/cni-7e614b9d-b85d-a856-f571-bd10df55bca8" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.487 [INFO][4994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.487 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.624 [INFO][5034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.624 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.652 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.666 [WARNING][5034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.666 [INFO][5034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.668 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:43.674575 containerd[1717]: 2024-12-13 01:28:43.671 [INFO][4994] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:43.675126 containerd[1717]: time="2024-12-13T01:28:43.674997440Z" level=info msg="TearDown network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\" successfully" Dec 13 01:28:43.675126 containerd[1717]: time="2024-12-13T01:28:43.675035080Z" level=info msg="StopPodSandbox for \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\" returns successfully" Dec 13 01:28:43.676780 containerd[1717]: time="2024-12-13T01:28:43.676757597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77b9bd95-dxdjg,Uid:2b3430af-6f3d-4057-8b66-f5f006481739,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:28:43.754158 systemd-networkd[1333]: cali2304c40a647: Link UP Dec 13 01:28:43.755979 systemd-networkd[1333]: cali2304c40a647: Gained carrier Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.636 [INFO][5040] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.669 [INFO][5040] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0 calico-kube-controllers-6d57c6fb5b- calico-system 365724d2-08aa-4224-b178-802ca3c1363c 804 0 2024-12-13 01:28:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d57c6fb5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-a-d903163327 calico-kube-controllers-6d57c6fb5b-qv9gg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2304c40a647 [] []}} ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Namespace="calico-system" Pod="calico-kube-controllers-6d57c6fb5b-qv9gg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.669 [INFO][5040] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Namespace="calico-system" Pod="calico-kube-controllers-6d57c6fb5b-qv9gg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.701 [INFO][5061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" HandleID="k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.713 [INFO][5061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" HandleID="k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-d903163327", "pod":"calico-kube-controllers-6d57c6fb5b-qv9gg", 
"timestamp":"2024-12-13 01:28:43.701587669 +0000 UTC"}, Hostname:"ci-4081.2.1-a-d903163327", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.713 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.713 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.713 [INFO][5061] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-d903163327' Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.715 [INFO][5061] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.719 [INFO][5061] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.722 [INFO][5061] ipam/ipam.go 489: Trying affinity for 192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.724 [INFO][5061] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.727 [INFO][5061] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.727 [INFO][5061] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.192/26 handle="k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.728 [INFO][5061] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6 Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.733 [INFO][5061] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.192/26 handle="k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.744 [INFO][5061] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.195/26] block=192.168.108.192/26 handle="k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.744 [INFO][5061] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.195/26] handle="k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.744 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:28:43.786053 containerd[1717]: 2024-12-13 01:28:43.744 [INFO][5061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.195/26] IPv6=[] ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" HandleID="k8s-pod-network.48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.786712 containerd[1717]: 2024-12-13 01:28:43.749 [INFO][5040] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Namespace="calico-system" Pod="calico-kube-controllers-6d57c6fb5b-qv9gg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0", GenerateName:"calico-kube-controllers-6d57c6fb5b-", Namespace:"calico-system", SelfLink:"", UID:"365724d2-08aa-4224-b178-802ca3c1363c", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d57c6fb5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"", Pod:"calico-kube-controllers-6d57c6fb5b-qv9gg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2304c40a647", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:43.786712 containerd[1717]: 2024-12-13 01:28:43.750 [INFO][5040] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.195/32] ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Namespace="calico-system" Pod="calico-kube-controllers-6d57c6fb5b-qv9gg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.786712 containerd[1717]: 2024-12-13 01:28:43.750 [INFO][5040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2304c40a647 ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Namespace="calico-system" Pod="calico-kube-controllers-6d57c6fb5b-qv9gg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.786712 containerd[1717]: 2024-12-13 01:28:43.757 [INFO][5040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Namespace="calico-system" Pod="calico-kube-controllers-6d57c6fb5b-qv9gg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.786712 
containerd[1717]: 2024-12-13 01:28:43.759 [INFO][5040] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Namespace="calico-system" Pod="calico-kube-controllers-6d57c6fb5b-qv9gg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0", GenerateName:"calico-kube-controllers-6d57c6fb5b-", Namespace:"calico-system", SelfLink:"", UID:"365724d2-08aa-4224-b178-802ca3c1363c", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d57c6fb5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6", Pod:"calico-kube-controllers-6d57c6fb5b-qv9gg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2304c40a647", MAC:"b6:59:ea:d6:0e:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:43.786712 containerd[1717]: 2024-12-13 01:28:43.779 [INFO][5040] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6" Namespace="calico-system" Pod="calico-kube-controllers-6d57c6fb5b-qv9gg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:43.861829 containerd[1717]: time="2024-12-13T01:28:43.857998563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:43.861829 containerd[1717]: time="2024-12-13T01:28:43.858740521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:43.861829 containerd[1717]: time="2024-12-13T01:28:43.858862161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:43.861829 containerd[1717]: time="2024-12-13T01:28:43.859155921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:43.887596 systemd[1]: Started cri-containerd-48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6.scope - libcontainer container 48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6. 
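
"Started cri-containerd-48d8….scope" shows the runc shim placing the new sandbox container in its own systemd scope unit. To cross-check what containerd itself tracks, a small client program can list the CRI-managed containers; the socket path and the "k8s.io" namespace below are the usual defaults on a host like this, assumed rather than taken from this log.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd socket; CRI-managed containers live in "k8s.io".
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        cs, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range cs {
            // Prints IDs such as 48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6
            fmt.Println(c.ID())
        }
    }
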
Dec 13 01:28:43.895448 systemd-networkd[1333]: cali275e4638493: Link UP Dec 13 01:28:43.895628 systemd-networkd[1333]: cali275e4638493: Gained carrier Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.760 [INFO][5067] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.788 [INFO][5067] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0 calico-apiserver-6b77b9bd95- calico-apiserver 8b64c234-9ef4-4520-bc58-c5c9910e2b79 803 0 2024-12-13 01:28:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b77b9bd95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-d903163327 calico-apiserver-6b77b9bd95-v8rbn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali275e4638493 [] []}} ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-v8rbn" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.789 [INFO][5067] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-v8rbn" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.819 [INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" HandleID="k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.835 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" HandleID="k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-d903163327", "pod":"calico-apiserver-6b77b9bd95-v8rbn", "timestamp":"2024-12-13 01:28:43.819881357 +0000 UTC"}, Hostname:"ci-4081.2.1-a-d903163327", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.835 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.835 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.835 [INFO][5098] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-d903163327' Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.837 [INFO][5098] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.843 [INFO][5098] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.849 [INFO][5098] ipam/ipam.go 489: Trying affinity for 192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.852 [INFO][5098] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.858 [INFO][5098] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.858 [INFO][5098] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.192/26 handle="k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.862 [INFO][5098] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.870 [INFO][5098] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.192/26 handle="k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.885 [INFO][5098] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.196/26] block=192.168.108.192/26 handle="k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.885 [INFO][5098] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.196/26] handle="k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.885 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
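
Every IPAM operation in this stretch brackets its work with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock", so concurrent CNI invocations on one node serialize their read-modify-write of the allocation blocks. A file lock gives the same effect across processes; the sketch below shows the pattern with an illustrative lock path, not Calico's actual mechanism.

    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    // withHostLock runs fn while holding an exclusive advisory lock, so two
    // CNI processes cannot interleave their block updates.
    func withHostLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil { // "Acquired host-wide IPAM lock"
            return err
        }
        defer unix.Flock(int(f.Fd()), unix.LOCK_UN) // "Released host-wide IPAM lock"
        return fn()
    }

    func main() {
        err := withHostLock("/var/run/example-ipam.lock", func() error {
            // load block, claim an address, write the block back
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }
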
Dec 13 01:28:43.917926 containerd[1717]: 2024-12-13 01:28:43.885 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.196/26] IPv6=[] ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" HandleID="k8s-pod-network.9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.918544 containerd[1717]: 2024-12-13 01:28:43.891 [INFO][5067] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-v8rbn" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0", GenerateName:"calico-apiserver-6b77b9bd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b64c234-9ef4-4520-bc58-c5c9910e2b79", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77b9bd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"", Pod:"calico-apiserver-6b77b9bd95-v8rbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275e4638493", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:43.918544 containerd[1717]: 2024-12-13 01:28:43.891 [INFO][5067] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.196/32] ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-v8rbn" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.918544 containerd[1717]: 2024-12-13 01:28:43.891 [INFO][5067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali275e4638493 ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-v8rbn" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.918544 containerd[1717]: 2024-12-13 01:28:43.896 [INFO][5067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-v8rbn" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.918544 containerd[1717]: 2024-12-13 01:28:43.898 [INFO][5067] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-v8rbn" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0", GenerateName:"calico-apiserver-6b77b9bd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b64c234-9ef4-4520-bc58-c5c9910e2b79", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77b9bd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb", Pod:"calico-apiserver-6b77b9bd95-v8rbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275e4638493", MAC:"f6:cc:b8:28:5e:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:43.918544 containerd[1717]: 2024-12-13 01:28:43.915 [INFO][5067] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-v8rbn" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:43.929599 kubelet[3262]: I1213 01:28:43.929052 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:43.985903 containerd[1717]: time="2024-12-13T01:28:43.985306954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:43.985903 containerd[1717]: time="2024-12-13T01:28:43.985367674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:43.985903 containerd[1717]: time="2024-12-13T01:28:43.985391314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:43.988339 containerd[1717]: time="2024-12-13T01:28:43.987413630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:44.042759 systemd[1]: Started cri-containerd-9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb.scope - libcontainer container 9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb. 
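
Every plugin message above carries its context as key="value" pairs appended to the text, which makes it straightforward to follow a single ContainerID across teardown and re-creation. A small sketch of pulling those fields out; the regex is illustrative and only handles the quoted pairs seen in these lines.

    package main

    import (
        "fmt"
        "regexp"
    )

    // kv matches the key="value" pairs Calico's CNI plugin appends to each
    // message, e.g. ContainerID="…" HandleID="k8s-pod-network.…".
    var kv = regexp.MustCompile(`(\w+)="([^"]*)"`)

    func main() {
        line := `2024-12-13 01:28:43.891 [INFO][5067] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.196/32] ContainerID="9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb"`
        fields := map[string]string{}
        for _, m := range kv.FindAllStringSubmatch(line, -1) {
            fields[m[1]] = m[2]
        }
        fmt.Println(fields["ContainerID"])
    }
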
Dec 13 01:28:44.089552 containerd[1717]: time="2024-12-13T01:28:44.088323113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d57c6fb5b-qv9gg,Uid:365724d2-08aa-4224-b178-802ca3c1363c,Namespace:calico-system,Attempt:1,} returns sandbox id \"48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6\"" Dec 13 01:28:44.130367 containerd[1717]: time="2024-12-13T01:28:44.130154551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77b9bd95-v8rbn,Uid:8b64c234-9ef4-4520-bc58-c5c9910e2b79,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb\"" Dec 13 01:28:44.174874 systemd-networkd[1333]: cali4dd514115d5: Link UP Dec 13 01:28:44.176260 systemd-networkd[1333]: cali4dd514115d5: Gained carrier Dec 13 01:28:44.197464 containerd[1717]: time="2024-12-13T01:28:44.196889941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:43.936 [INFO][5123] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:43.994 [INFO][5123] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0 calico-apiserver-6b77b9bd95- calico-apiserver 2b3430af-6f3d-4057-8b66-f5f006481739 806 0 2024-12-13 01:28:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b77b9bd95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-d903163327 calico-apiserver-6b77b9bd95-dxdjg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4dd514115d5 [] []}} ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-dxdjg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:43.995 [INFO][5123] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-dxdjg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.084 [INFO][5203] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" HandleID="k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.111 [INFO][5203] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" HandleID="k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000425a20), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-d903163327", "pod":"calico-apiserver-6b77b9bd95-dxdjg", "timestamp":"2024-12-13 01:28:44.084899919 +0000 UTC"}, Hostname:"ci-4081.2.1-a-d903163327", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.111 [INFO][5203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.113 [INFO][5203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.113 [INFO][5203] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-d903163327' Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.119 [INFO][5203] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.130 [INFO][5203] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.140 [INFO][5203] ipam/ipam.go 489: Trying affinity for 192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.143 [INFO][5203] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.146 [INFO][5203] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.146 [INFO][5203] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.192/26 handle="k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.148 [INFO][5203] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760 Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.157 [INFO][5203] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.192/26 handle="k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.167 [INFO][5203] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.197/26] block=192.168.108.192/26 handle="k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.170 [INFO][5203] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.197/26] handle="k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.170 [INFO][5203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:28:44.198272 containerd[1717]: 2024-12-13 01:28:44.170 [INFO][5203] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.197/26] IPv6=[] ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" HandleID="k8s-pod-network.1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:44.199117 containerd[1717]: 2024-12-13 01:28:44.172 [INFO][5123] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-dxdjg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0", GenerateName:"calico-apiserver-6b77b9bd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3430af-6f3d-4057-8b66-f5f006481739", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77b9bd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"", Pod:"calico-apiserver-6b77b9bd95-dxdjg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dd514115d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:44.199117 containerd[1717]: 2024-12-13 01:28:44.172 [INFO][5123] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.197/32] ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-dxdjg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:44.199117 containerd[1717]: 2024-12-13 01:28:44.172 [INFO][5123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dd514115d5 ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-dxdjg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:44.199117 containerd[1717]: 2024-12-13 01:28:44.176 [INFO][5123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-dxdjg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:44.199117 containerd[1717]: 2024-12-13 01:28:44.178 [INFO][5123] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-dxdjg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0", GenerateName:"calico-apiserver-6b77b9bd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3430af-6f3d-4057-8b66-f5f006481739", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77b9bd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760", Pod:"calico-apiserver-6b77b9bd95-dxdjg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dd514115d5", MAC:"a6:44:07:31:ef:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:44.199117 containerd[1717]: 2024-12-13 01:28:44.195 [INFO][5123] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760" Namespace="calico-apiserver" Pod="calico-apiserver-6b77b9bd95-dxdjg" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:44.200153 containerd[1717]: time="2024-12-13T01:28:44.200105014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:28:44.205025 containerd[1717]: time="2024-12-13T01:28:44.204987365Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:44.219308 containerd[1717]: time="2024-12-13T01:28:44.219163497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:44.220437 containerd[1717]: time="2024-12-13T01:28:44.220336335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.716327806s" Dec 13 01:28:44.220437 containerd[1717]: 
time="2024-12-13T01:28:44.220374055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:28:44.222304 containerd[1717]: time="2024-12-13T01:28:44.222215331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:28:44.225279 containerd[1717]: time="2024-12-13T01:28:44.224745846Z" level=info msg="CreateContainer within sandbox \"4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:28:44.230248 systemd-networkd[1333]: cali745984e1cbf: Link UP Dec 13 01:28:44.231538 systemd-networkd[1333]: cali745984e1cbf: Gained carrier Dec 13 01:28:44.245447 containerd[1717]: time="2024-12-13T01:28:44.244776527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:44.245447 containerd[1717]: time="2024-12-13T01:28:44.244833127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:44.245447 containerd[1717]: time="2024-12-13T01:28:44.244847647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:44.245447 containerd[1717]: time="2024-12-13T01:28:44.245040046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:43.911 [INFO][5111] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:43.958 [INFO][5111] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0 coredns-76f75df574- kube-system 2d905420-9c02-456f-8155-0c05b7bba211 805 0 2024-12-13 01:28:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-d903163327 coredns-76f75df574-kb9pl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali745984e1cbf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Namespace="kube-system" Pod="coredns-76f75df574-kb9pl" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:43.958 [INFO][5111] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Namespace="kube-system" Pod="coredns-76f75df574-kb9pl" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.101 [INFO][5190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" HandleID="k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.125 [INFO][5190] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" HandleID="k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-d903163327", "pod":"coredns-76f75df574-kb9pl", "timestamp":"2024-12-13 01:28:44.101666727 +0000 UTC"}, Hostname:"ci-4081.2.1-a-d903163327", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.125 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.170 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.170 [INFO][5190] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-d903163327' Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.176 [INFO][5190] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.186 [INFO][5190] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.197 [INFO][5190] ipam/ipam.go 489: Trying affinity for 192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.201 [INFO][5190] ipam/ipam.go 155: Attempting to load block cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.203 [INFO][5190] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.108.192/26 host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.203 [INFO][5190] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.108.192/26 handle="k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.205 [INFO][5190] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841 Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.209 [INFO][5190] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.108.192/26 handle="k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.218 [INFO][5190] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.108.198/26] block=192.168.108.192/26 handle="k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.218 [INFO][5190] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.108.198/26] handle="k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" host="ci-4081.2.1-a-d903163327" Dec 13 01:28:44.252455 
containerd[1717]: 2024-12-13 01:28:44.218 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:44.252455 containerd[1717]: 2024-12-13 01:28:44.218 [INFO][5190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.198/26] IPv6=[] ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" HandleID="k8s-pod-network.5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:44.252959 containerd[1717]: 2024-12-13 01:28:44.223 [INFO][5111] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Namespace="kube-system" Pod="coredns-76f75df574-kb9pl" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2d905420-9c02-456f-8155-0c05b7bba211", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"", Pod:"coredns-76f75df574-kb9pl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali745984e1cbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:44.252959 containerd[1717]: 2024-12-13 01:28:44.223 [INFO][5111] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.108.198/32] ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Namespace="kube-system" Pod="coredns-76f75df574-kb9pl" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:44.252959 containerd[1717]: 2024-12-13 01:28:44.223 [INFO][5111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali745984e1cbf ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Namespace="kube-system" Pod="coredns-76f75df574-kb9pl" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:44.252959 containerd[1717]: 2024-12-13 01:28:44.232 [INFO][5111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Namespace="kube-system" Pod="coredns-76f75df574-kb9pl" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:44.252959 containerd[1717]: 2024-12-13 01:28:44.234 [INFO][5111] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Namespace="kube-system" Pod="coredns-76f75df574-kb9pl" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2d905420-9c02-456f-8155-0c05b7bba211", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841", Pod:"coredns-76f75df574-kb9pl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali745984e1cbf", MAC:"72:bb:27:e0:03:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:44.252959 containerd[1717]: 2024-12-13 01:28:44.250 [INFO][5111] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841" Namespace="kube-system" Pod="coredns-76f75df574-kb9pl" WorkloadEndpoint="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:44.272593 systemd[1]: Started cri-containerd-1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760.scope - libcontainer container 1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760. 
Dec 13 01:28:44.273954 containerd[1717]: time="2024-12-13T01:28:44.273863830Z" level=info msg="CreateContainer within sandbox \"4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7728b7917392d6a61a044d64557964a68ec56f6532dad61b053ed8154326a572\"" Dec 13 01:28:44.275751 containerd[1717]: time="2024-12-13T01:28:44.275682027Z" level=info msg="StartContainer for \"7728b7917392d6a61a044d64557964a68ec56f6532dad61b053ed8154326a572\"" Dec 13 01:28:44.299139 containerd[1717]: time="2024-12-13T01:28:44.297229384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:44.299139 containerd[1717]: time="2024-12-13T01:28:44.297371544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:44.299139 containerd[1717]: time="2024-12-13T01:28:44.297409544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:44.299139 containerd[1717]: time="2024-12-13T01:28:44.298678302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:44.317095 systemd[1]: Started cri-containerd-5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841.scope - libcontainer container 5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841. Dec 13 01:28:44.329820 containerd[1717]: time="2024-12-13T01:28:44.329004882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b77b9bd95-dxdjg,Uid:2b3430af-6f3d-4057-8b66-f5f006481739,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760\"" Dec 13 01:28:44.369616 systemd[1]: Started cri-containerd-7728b7917392d6a61a044d64557964a68ec56f6532dad61b053ed8154326a572.scope - libcontainer container 7728b7917392d6a61a044d64557964a68ec56f6532dad61b053ed8154326a572. 
Dec 13 01:28:44.384827 containerd[1717]: time="2024-12-13T01:28:44.384696013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kb9pl,Uid:2d905420-9c02-456f-8155-0c05b7bba211,Namespace:kube-system,Attempt:1,} returns sandbox id \"5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841\"" Dec 13 01:28:44.394256 containerd[1717]: time="2024-12-13T01:28:44.394095075Z" level=info msg="CreateContainer within sandbox \"5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:44.414349 containerd[1717]: time="2024-12-13T01:28:44.414294316Z" level=info msg="StartContainer for \"7728b7917392d6a61a044d64557964a68ec56f6532dad61b053ed8154326a572\" returns successfully" Dec 13 01:28:44.434469 containerd[1717]: time="2024-12-13T01:28:44.434384396Z" level=info msg="CreateContainer within sandbox \"5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11bf3de2db98c8f2df31cdbc47f5d071886a0bd1d2a5fed2d038c94951568ed9\"" Dec 13 01:28:44.435452 containerd[1717]: time="2024-12-13T01:28:44.434956355Z" level=info msg="StartContainer for \"11bf3de2db98c8f2df31cdbc47f5d071886a0bd1d2a5fed2d038c94951568ed9\"" Dec 13 01:28:44.458617 systemd[1]: Started cri-containerd-11bf3de2db98c8f2df31cdbc47f5d071886a0bd1d2a5fed2d038c94951568ed9.scope - libcontainer container 11bf3de2db98c8f2df31cdbc47f5d071886a0bd1d2a5fed2d038c94951568ed9. Dec 13 01:28:44.484251 containerd[1717]: time="2024-12-13T01:28:44.484205939Z" level=info msg="StartContainer for \"11bf3de2db98c8f2df31cdbc47f5d071886a0bd1d2a5fed2d038c94951568ed9\" returns successfully" Dec 13 01:28:44.509567 systemd[1]: run-netns-cni\x2d7e614b9d\x2db85d\x2da856\x2df571\x2dbd10df55bca8.mount: Deactivated successfully. Dec 13 01:28:44.509661 systemd[1]: run-netns-cni\x2d742190be\x2de273\x2d2500\x2de747\x2d85d9d74a62d1.mount: Deactivated successfully. 
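Interleaved across several pods as they are, the entries above still follow a strict per-pod order: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer reports success. A toy Go model of that ordering (the names mirror the CRI verbs; the real interface is the gRPC RuntimeService in k8s.io/cri-api, not these functions):

    package main

    import "fmt"

    type sandbox struct{ id string }

    type container struct {
        id      string
        started bool
    }

    func runPodSandbox(id string) sandbox {
        fmt.Printf("RunPodSandbox returns sandbox id %q\n", id)
        return sandbox{id: id}
    }

    func createContainer(sb sandbox, cid string) container {
        // A container can only be created within an existing sandbox.
        fmt.Printf("CreateContainer within sandbox %q returns container id %q\n", sb.id, cid)
        return container{id: cid}
    }

    func startContainer(c *container) {
        c.started = true
        fmt.Printf("StartContainer for %q returns successfully\n", c.id)
    }

    func main() {
        // IDs truncated from the coredns-76f75df574-kb9pl entries above.
        sb := runPodSandbox("5e335674688d...")
        c := createContainer(sb, "11bf3de2db98...")
        startContainer(&c)
    }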
Dec 13 01:28:44.587329 kubelet[3262]: I1213 01:28:44.587264 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-996pm" podStartSLOduration=22.109524334 podStartE2EDuration="25.587221338s" podCreationTimestamp="2024-12-13 01:28:19 +0000 UTC" firstStartedPulling="2024-12-13 01:28:40.74320669 +0000 UTC m=+43.584941695" lastFinishedPulling="2024-12-13 01:28:44.220903614 +0000 UTC m=+47.062638699" observedRunningTime="2024-12-13 01:28:44.558260314 +0000 UTC m=+47.399995319" watchObservedRunningTime="2024-12-13 01:28:44.587221338 +0000 UTC m=+47.428956343" Dec 13 01:28:44.629283 kubelet[3262]: I1213 01:28:44.629232 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kb9pl" podStartSLOduration=33.629190816 podStartE2EDuration="33.629190816s" podCreationTimestamp="2024-12-13 01:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:44.60644878 +0000 UTC m=+47.448183825" watchObservedRunningTime="2024-12-13 01:28:44.629190816 +0000 UTC m=+47.470925821" Dec 13 01:28:44.872465 kernel: bpftool[5464]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:28:45.111064 systemd-networkd[1333]: vxlan.calico: Link UP Dec 13 01:28:45.111183 systemd-networkd[1333]: vxlan.calico: Gained carrier Dec 13 01:28:45.355351 kubelet[3262]: I1213 01:28:45.355226 3262 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:28:45.355351 kubelet[3262]: I1213 01:28:45.355263 3262 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:28:45.461630 systemd-networkd[1333]: cali2304c40a647: Gained IPv6LL Dec 13 01:28:45.844724 systemd-networkd[1333]: cali275e4638493: Gained IPv6LL Dec 13 01:28:45.908585 systemd-networkd[1333]: cali745984e1cbf: Gained IPv6LL Dec 13 01:28:46.100586 systemd-networkd[1333]: cali4dd514115d5: Gained IPv6LL Dec 13 01:28:46.548584 systemd-networkd[1333]: vxlan.calico: Gained IPv6LL Dec 13 01:28:48.085100 containerd[1717]: time="2024-12-13T01:28:48.085051655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:48.087538 containerd[1717]: time="2024-12-13T01:28:48.087467570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:28:48.091921 containerd[1717]: time="2024-12-13T01:28:48.091861721Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:48.096104 containerd[1717]: time="2024-12-13T01:28:48.096067793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:48.096960 containerd[1717]: time="2024-12-13T01:28:48.096577432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", 
repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.874321821s" Dec 13 01:28:48.096960 containerd[1717]: time="2024-12-13T01:28:48.096611232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:28:48.099137 containerd[1717]: time="2024-12-13T01:28:48.098888307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:48.119677 containerd[1717]: time="2024-12-13T01:28:48.119631106Z" level=info msg="CreateContainer within sandbox \"48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:28:48.168959 containerd[1717]: time="2024-12-13T01:28:48.168865127Z" level=info msg="CreateContainer within sandbox \"48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"91b77f51d1c0a258f0294b1e5d8d5ac968e636d2272aeb4d43224f57da62573a\"" Dec 13 01:28:48.170526 containerd[1717]: time="2024-12-13T01:28:48.169774445Z" level=info msg="StartContainer for \"91b77f51d1c0a258f0294b1e5d8d5ac968e636d2272aeb4d43224f57da62573a\"" Dec 13 01:28:48.207621 systemd[1]: Started cri-containerd-91b77f51d1c0a258f0294b1e5d8d5ac968e636d2272aeb4d43224f57da62573a.scope - libcontainer container 91b77f51d1c0a258f0294b1e5d8d5ac968e636d2272aeb4d43224f57da62573a. Dec 13 01:28:48.244840 containerd[1717]: time="2024-12-13T01:28:48.244802015Z" level=info msg="StartContainer for \"91b77f51d1c0a258f0294b1e5d8d5ac968e636d2272aeb4d43224f57da62573a\" returns successfully" Dec 13 01:28:48.601136 kubelet[3262]: I1213 01:28:48.600635 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d57c6fb5b-qv9gg" podStartSLOduration=25.594379298 podStartE2EDuration="29.600587101s" podCreationTimestamp="2024-12-13 01:28:19 +0000 UTC" firstStartedPulling="2024-12-13 01:28:44.091555666 +0000 UTC m=+46.933290671" lastFinishedPulling="2024-12-13 01:28:48.097763469 +0000 UTC m=+50.939498474" observedRunningTime="2024-12-13 01:28:48.598207186 +0000 UTC m=+51.439942191" watchObservedRunningTime="2024-12-13 01:28:48.600587101 +0000 UTC m=+51.442322106" Dec 13 01:28:49.103545 systemd[1]: run-containerd-runc-k8s.io-91b77f51d1c0a258f0294b1e5d8d5ac968e636d2272aeb4d43224f57da62573a-runc.R332fm.mount: Deactivated successfully. 
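The pod_startup_latency_tracker entries are internally consistent, and the monotonic m=+ offsets make that checkable: podStartSLOduration is the end-to-end start latency minus the time spent pulling images. A quick Go check with the csi-node-driver-996pm values from above:

    package main

    import "fmt"

    func main() {
        // Monotonic clock offsets (the m=+... values) from the
        // csi-node-driver-996pm entry.
        const (
            firstStartedPulling = 43.584941695
            lastFinishedPulling = 47.062638699
            podStartE2EDuration = 25.587221338 // creation -> observed running
        )
        pull := lastFinishedPulling - firstStartedPulling // 3.477697004s pulling images
        // The tracker reports start latency excluding image pulls:
        fmt.Printf("podStartSLOduration = %.9fs\n", podStartE2EDuration-pull)
        // -> 22.109524334s, matching the log line above.
    }

The coredns-76f75df574-kb9pl entry shows the degenerate case: both pull timestamps are the zero time, so nothing is subtracted and podStartSLOduration equals podStartE2EDuration (33.629190816s). The calico-kube-controllers entry at 01:28:48 checks out the same way: 29.600587101s minus a 4.006207803s pull is 25.594379298s.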
Dec 13 01:28:50.206294 containerd[1717]: time="2024-12-13T01:28:50.205605042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:50.208024 containerd[1717]: time="2024-12-13T01:28:50.207992237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:28:50.210792 containerd[1717]: time="2024-12-13T01:28:50.210670312Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:50.216579 containerd[1717]: time="2024-12-13T01:28:50.216533900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:50.218073 containerd[1717]: time="2024-12-13T01:28:50.217881298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.118918711s" Dec 13 01:28:50.218073 containerd[1717]: time="2024-12-13T01:28:50.218005977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:28:50.219124 containerd[1717]: time="2024-12-13T01:28:50.218679176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:50.220844 containerd[1717]: time="2024-12-13T01:28:50.220721172Z" level=info msg="CreateContainer within sandbox \"9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:50.249507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384455813.mount: Deactivated successfully. Dec 13 01:28:50.257679 containerd[1717]: time="2024-12-13T01:28:50.257629018Z" level=info msg="CreateContainer within sandbox \"9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5a0d713d28e403c88efb98f900fd17d965f80a8f03874e7a047d6a6c2665342e\"" Dec 13 01:28:50.258655 containerd[1717]: time="2024-12-13T01:28:50.258596776Z" level=info msg="StartContainer for \"5a0d713d28e403c88efb98f900fd17d965f80a8f03874e7a047d6a6c2665342e\"" Dec 13 01:28:50.288581 systemd[1]: Started cri-containerd-5a0d713d28e403c88efb98f900fd17d965f80a8f03874e7a047d6a6c2665342e.scope - libcontainer container 5a0d713d28e403c88efb98f900fd17d965f80a8f03874e7a047d6a6c2665342e. 
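The pull records also yield a rough transfer rate. "bytes read" counts what was actually fetched from the registry during this pull, which can be less than the size reported with the digest when blobs are shared or already present, so treat the result as an approximation:

    package main

    import "fmt"

    func main() {
        // calico/apiserver pull above: bytes fetched and wall time.
        const bytesRead = 39298409  // "active requests=0, bytes read=39298409"
        const seconds = 2.118918711 // "in 2.118918711s"
        fmt.Printf("~%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~17.7 MiB/s
    }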
Dec 13 01:28:50.640675 containerd[1717]: time="2024-12-13T01:28:50.640608890Z" level=info msg="StartContainer for \"5a0d713d28e403c88efb98f900fd17d965f80a8f03874e7a047d6a6c2665342e\" returns successfully" Dec 13 01:28:51.424102 containerd[1717]: time="2024-12-13T01:28:51.424045879Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:51.426958 containerd[1717]: time="2024-12-13T01:28:51.426711793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:28:51.428844 containerd[1717]: time="2024-12-13T01:28:51.428819349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.210107693s" Dec 13 01:28:51.428961 containerd[1717]: time="2024-12-13T01:28:51.428943469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:28:51.433016 containerd[1717]: time="2024-12-13T01:28:51.432886501Z" level=info msg="CreateContainer within sandbox \"1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:51.472799 containerd[1717]: time="2024-12-13T01:28:51.472646461Z" level=info msg="CreateContainer within sandbox \"1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"089a31f1ff138eab0477a598588c2448480528d51d49431cd5b6434010ce1e67\"" Dec 13 01:28:51.474440 containerd[1717]: time="2024-12-13T01:28:51.473973898Z" level=info msg="StartContainer for \"089a31f1ff138eab0477a598588c2448480528d51d49431cd5b6434010ce1e67\"" Dec 13 01:28:51.519595 systemd[1]: Started cri-containerd-089a31f1ff138eab0477a598588c2448480528d51d49431cd5b6434010ce1e67.scope - libcontainer container 089a31f1ff138eab0477a598588c2448480528d51d49431cd5b6434010ce1e67. 
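Contrast the second apiserver pull just above: it is logged as ImageUpdate rather than ImageCreate, reads only 77 bytes, and still takes 1.21s, which is consistent with every layer already being in the content store so that only the manifest round-trip to the registry remains. Parsing the two durations makes the cache effect explicit:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        cold, _ := time.ParseDuration("2.118918711s") // first pull, 39298409 bytes read
        warm, _ := time.ParseDuration("1.210107693s") // second pull, 77 bytes read
        fmt.Println("cold:", cold, "warm:", warm, "saved:", cold-warm)
    }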
Dec 13 01:28:51.584389 containerd[1717]: time="2024-12-13T01:28:51.584352637Z" level=info msg="StartContainer for \"089a31f1ff138eab0477a598588c2448480528d51d49431cd5b6434010ce1e67\" returns successfully" Dec 13 01:28:51.652906 kubelet[3262]: I1213 01:28:51.652505 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:51.664319 kubelet[3262]: I1213 01:28:51.663635 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b77b9bd95-v8rbn" podStartSLOduration=27.579487965 podStartE2EDuration="33.663590318s" podCreationTimestamp="2024-12-13 01:28:18 +0000 UTC" firstStartedPulling="2024-12-13 01:28:44.134354703 +0000 UTC m=+46.976089708" lastFinishedPulling="2024-12-13 01:28:50.218457016 +0000 UTC m=+53.060192061" observedRunningTime="2024-12-13 01:28:50.663621044 +0000 UTC m=+53.505356049" watchObservedRunningTime="2024-12-13 01:28:51.663590318 +0000 UTC m=+54.505325323" Dec 13 01:28:52.310587 update_engine[1683]: I20241213 01:28:52.310468 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:28:52.310941 update_engine[1683]: I20241213 01:28:52.310786 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:28:52.311032 update_engine[1683]: I20241213 01:28:52.310998 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:28:52.342880 update_engine[1683]: E20241213 01:28:52.342815 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:28:52.343111 update_engine[1683]: I20241213 01:28:52.342903 1683 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 01:28:52.654465 kubelet[3262]: I1213 01:28:52.654328 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:57.273838 containerd[1717]: time="2024-12-13T01:28:57.273771684Z" level=info msg="StopPodSandbox for \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\"" Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.312 [WARNING][5723] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0", GenerateName:"calico-apiserver-6b77b9bd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b64c234-9ef4-4520-bc58-c5c9910e2b79", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77b9bd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb", Pod:"calico-apiserver-6b77b9bd95-v8rbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275e4638493", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.312 [INFO][5723] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.312 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" iface="eth0" netns="" Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.312 [INFO][5723] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.312 [INFO][5723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.332 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.332 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.332 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.340 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.341 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.342 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.344953 containerd[1717]: 2024-12-13 01:28:57.343 [INFO][5723] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:57.345640 containerd[1717]: time="2024-12-13T01:28:57.345006745Z" level=info msg="TearDown network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\" successfully" Dec 13 01:28:57.345640 containerd[1717]: time="2024-12-13T01:28:57.345044305Z" level=info msg="StopPodSandbox for \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\" returns successfully" Dec 13 01:28:57.346565 containerd[1717]: time="2024-12-13T01:28:57.346490302Z" level=info msg="RemovePodSandbox for \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\"" Dec 13 01:28:57.346565 containerd[1717]: time="2024-12-13T01:28:57.346521422Z" level=info msg="Forcibly stopping sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\"" Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.383 [WARNING][5750] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0", GenerateName:"calico-apiserver-6b77b9bd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b64c234-9ef4-4520-bc58-c5c9910e2b79", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77b9bd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"9f3c173f175aebe887e8c0c8fe07753d06e84df6069683e75c714fed3011bcdb", Pod:"calico-apiserver-6b77b9bd95-v8rbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275e4638493", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.383 [INFO][5750] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.383 [INFO][5750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" iface="eth0" netns="" Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.383 [INFO][5750] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.383 [INFO][5750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.400 [INFO][5756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.400 [INFO][5756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.400 [INFO][5756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.410 [WARNING][5756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.410 [INFO][5756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" HandleID="k8s-pod-network.5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--v8rbn-eth0" Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.411 [INFO][5756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.414788 containerd[1717]: 2024-12-13 01:28:57.413 [INFO][5750] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465" Dec 13 01:28:57.415751 containerd[1717]: time="2024-12-13T01:28:57.415261489Z" level=info msg="TearDown network for sandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\" successfully" Dec 13 01:28:57.441348 containerd[1717]: time="2024-12-13T01:28:57.441184439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:57.441348 containerd[1717]: time="2024-12-13T01:28:57.441260159Z" level=info msg="RemovePodSandbox \"5f2394a6f6a6c18dbb7062a66ea6fe92a805cbab9746c6c1b2681c53e8789465\" returns successfully" Dec 13 01:28:57.442035 containerd[1717]: time="2024-12-13T01:28:57.441782678Z" level=info msg="StopPodSandbox for \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\"" Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.477 [WARNING][5774] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0", GenerateName:"calico-apiserver-6b77b9bd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3430af-6f3d-4057-8b66-f5f006481739", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77b9bd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760", Pod:"calico-apiserver-6b77b9bd95-dxdjg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dd514115d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.477 [INFO][5774] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.477 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" iface="eth0" netns="" Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.477 [INFO][5774] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.477 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.495 [INFO][5781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.495 [INFO][5781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.495 [INFO][5781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.503 [WARNING][5781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.503 [INFO][5781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.504 [INFO][5781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.507364 containerd[1717]: 2024-12-13 01:28:57.506 [INFO][5774] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:57.508643 containerd[1717]: time="2024-12-13T01:28:57.507837309Z" level=info msg="TearDown network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\" successfully" Dec 13 01:28:57.508643 containerd[1717]: time="2024-12-13T01:28:57.507865989Z" level=info msg="StopPodSandbox for \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\" returns successfully" Dec 13 01:28:57.508643 containerd[1717]: time="2024-12-13T01:28:57.508347708Z" level=info msg="RemovePodSandbox for \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\"" Dec 13 01:28:57.508643 containerd[1717]: time="2024-12-13T01:28:57.508373068Z" level=info msg="Forcibly stopping sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\"" Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.546 [WARNING][5799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0", GenerateName:"calico-apiserver-6b77b9bd95-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3430af-6f3d-4057-8b66-f5f006481739", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b77b9bd95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"1eaf8382ecef22ef75d573fa553df557bef6e1fe2da455d644c405d1b237f760", Pod:"calico-apiserver-6b77b9bd95-dxdjg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dd514115d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.546 [INFO][5799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.546 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" iface="eth0" netns="" Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.546 [INFO][5799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.546 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.563 [INFO][5805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.563 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.563 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.572 [WARNING][5805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.572 [INFO][5805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" HandleID="k8s-pod-network.ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Workload="ci--4081.2.1--a--d903163327-k8s-calico--apiserver--6b77b9bd95--dxdjg-eth0" Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.573 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.576513 containerd[1717]: 2024-12-13 01:28:57.575 [INFO][5799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a" Dec 13 01:28:57.578531 containerd[1717]: time="2024-12-13T01:28:57.578098533Z" level=info msg="TearDown network for sandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\" successfully" Dec 13 01:28:57.588075 containerd[1717]: time="2024-12-13T01:28:57.588025474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:57.588187 containerd[1717]: time="2024-12-13T01:28:57.588094034Z" level=info msg="RemovePodSandbox \"ab4f7ba1dee456d95157e1b1cd2bf6e8b85b39d7db58ee295ae01c52b4eb918a\" returns successfully" Dec 13 01:28:57.588827 containerd[1717]: time="2024-12-13T01:28:57.588537873Z" level=info msg="StopPodSandbox for \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\"" Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.621 [WARNING][5823] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d3416b50-96e6-4c99-89f2-df38b369aa49", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879", Pod:"coredns-76f75df574-kf9dd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f88713d54d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.621 [INFO][5823] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.621 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" iface="eth0" netns="" Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.621 [INFO][5823] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.621 [INFO][5823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.641 [INFO][5829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.641 [INFO][5829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.641 [INFO][5829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.651 [WARNING][5829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.651 [INFO][5829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.652 [INFO][5829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.654963 containerd[1717]: 2024-12-13 01:28:57.653 [INFO][5823] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:57.655370 containerd[1717]: time="2024-12-13T01:28:57.654989984Z" level=info msg="TearDown network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\" successfully" Dec 13 01:28:57.655370 containerd[1717]: time="2024-12-13T01:28:57.655022144Z" level=info msg="StopPodSandbox for \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\" returns successfully" Dec 13 01:28:57.655654 containerd[1717]: time="2024-12-13T01:28:57.655629503Z" level=info msg="RemovePodSandbox for \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\"" Dec 13 01:28:57.655722 containerd[1717]: time="2024-12-13T01:28:57.655675263Z" level=info msg="Forcibly stopping sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\"" Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.694 [WARNING][5847] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d3416b50-96e6-4c99-89f2-df38b369aa49", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"5bbe31217e5fc54cfed9a1ad3873a160e21e6a2ce8f7c98bb63897acf8e33879", Pod:"coredns-76f75df574-kf9dd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f88713d54d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.694 [INFO][5847] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.694 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" iface="eth0" netns="" Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.694 [INFO][5847] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.694 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.714 [INFO][5854] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.714 [INFO][5854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.714 [INFO][5854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.722 [WARNING][5854] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.722 [INFO][5854] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" HandleID="k8s-pod-network.b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kf9dd-eth0" Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.723 [INFO][5854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.726681 containerd[1717]: 2024-12-13 01:28:57.725 [INFO][5847] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5" Dec 13 01:28:57.726681 containerd[1717]: time="2024-12-13T01:28:57.726641165Z" level=info msg="TearDown network for sandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\" successfully" Dec 13 01:28:57.736590 containerd[1717]: time="2024-12-13T01:28:57.736544346Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:57.736711 containerd[1717]: time="2024-12-13T01:28:57.736621426Z" level=info msg="RemovePodSandbox \"b910df2149d81c024dc5ee6d021cd2e002e9a06ae42b454320fc997dbe2439c5\" returns successfully" Dec 13 01:28:57.737508 containerd[1717]: time="2024-12-13T01:28:57.737250144Z" level=info msg="StopPodSandbox for \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\"" Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.774 [WARNING][5872] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0", GenerateName:"calico-kube-controllers-6d57c6fb5b-", Namespace:"calico-system", SelfLink:"", UID:"365724d2-08aa-4224-b178-802ca3c1363c", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d57c6fb5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6", Pod:"calico-kube-controllers-6d57c6fb5b-qv9gg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2304c40a647", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.774 [INFO][5872] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.774 [INFO][5872] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" iface="eth0" netns="" Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.774 [INFO][5872] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.774 [INFO][5872] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.793 [INFO][5878] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.794 [INFO][5878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.794 [INFO][5878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.801 [WARNING][5878] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.802 [INFO][5878] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.803 [INFO][5878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.806983 containerd[1717]: 2024-12-13 01:28:57.805 [INFO][5872] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:57.807505 containerd[1717]: time="2024-12-13T01:28:57.807024529Z" level=info msg="TearDown network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\" successfully" Dec 13 01:28:57.807505 containerd[1717]: time="2024-12-13T01:28:57.807051609Z" level=info msg="StopPodSandbox for \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\" returns successfully" Dec 13 01:28:57.807649 containerd[1717]: time="2024-12-13T01:28:57.807615008Z" level=info msg="RemovePodSandbox for \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\"" Dec 13 01:28:57.807681 containerd[1717]: time="2024-12-13T01:28:57.807651568Z" level=info msg="Forcibly stopping sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\"" Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.844 [WARNING][5896] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0", GenerateName:"calico-kube-controllers-6d57c6fb5b-", Namespace:"calico-system", SelfLink:"", UID:"365724d2-08aa-4224-b178-802ca3c1363c", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d57c6fb5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"48d8d0f28299f4df2d2e88af7f91cdb2bc87d5e69aed4dbc293183b0593896a6", Pod:"calico-kube-controllers-6d57c6fb5b-qv9gg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2304c40a647", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.845 [INFO][5896] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.845 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" iface="eth0" netns="" Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.845 [INFO][5896] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.845 [INFO][5896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.864 [INFO][5902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.864 [INFO][5902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.864 [INFO][5902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.872 [WARNING][5902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.872 [INFO][5902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" HandleID="k8s-pod-network.c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Workload="ci--4081.2.1--a--d903163327-k8s-calico--kube--controllers--6d57c6fb5b--qv9gg-eth0" Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.873 [INFO][5902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.878445 containerd[1717]: 2024-12-13 01:28:57.874 [INFO][5896] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724" Dec 13 01:28:57.878445 containerd[1717]: time="2024-12-13T01:28:57.876192355Z" level=info msg="TearDown network for sandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\" successfully" Dec 13 01:28:57.892953 containerd[1717]: time="2024-12-13T01:28:57.892912282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:57.893061 containerd[1717]: time="2024-12-13T01:28:57.892988842Z" level=info msg="RemovePodSandbox \"c797da68def4b3665ede3c6f19603ea3d6285c066247f285ef1803098c8a7724\" returns successfully" Dec 13 01:28:57.893519 containerd[1717]: time="2024-12-13T01:28:57.893495001Z" level=info msg="StopPodSandbox for \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\"" Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.927 [WARNING][5920] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2d905420-9c02-456f-8155-0c05b7bba211", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841", Pod:"coredns-76f75df574-kb9pl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali745984e1cbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.928 [INFO][5920] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.928 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" iface="eth0" netns="" Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.928 [INFO][5920] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.928 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.947 [INFO][5926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.947 [INFO][5926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.947 [INFO][5926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.957 [WARNING][5926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.957 [INFO][5926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.958 [INFO][5926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:57.961941 containerd[1717]: 2024-12-13 01:28:57.960 [INFO][5920] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:57.962342 containerd[1717]: time="2024-12-13T01:28:57.961989268Z" level=info msg="TearDown network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\" successfully" Dec 13 01:28:57.962342 containerd[1717]: time="2024-12-13T01:28:57.962015348Z" level=info msg="StopPodSandbox for \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\" returns successfully" Dec 13 01:28:57.962879 containerd[1717]: time="2024-12-13T01:28:57.962850387Z" level=info msg="RemovePodSandbox for \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\"" Dec 13 01:28:57.962928 containerd[1717]: time="2024-12-13T01:28:57.962901067Z" level=info msg="Forcibly stopping sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\"" Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:57.997 [WARNING][5945] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2d905420-9c02-456f-8155-0c05b7bba211", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"5e335674688dc72a2aeef39e65a8994679c5a5db9b8e79f2541c40170e26c841", Pod:"coredns-76f75df574-kb9pl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali745984e1cbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:57.998 [INFO][5945] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:57.998 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" iface="eth0" netns="" Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:57.998 [INFO][5945] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:57.998 [INFO][5945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:58.016 [INFO][5952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:58.016 [INFO][5952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:58.016 [INFO][5952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:58.024 [WARNING][5952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:58.024 [INFO][5952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" HandleID="k8s-pod-network.52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Workload="ci--4081.2.1--a--d903163327-k8s-coredns--76f75df574--kb9pl-eth0" Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:58.025 [INFO][5952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:58.028537 containerd[1717]: 2024-12-13 01:28:58.026 [INFO][5945] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12" Dec 13 01:28:58.028963 containerd[1717]: time="2024-12-13T01:28:58.028573739Z" level=info msg="TearDown network for sandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\" successfully" Dec 13 01:28:58.037700 containerd[1717]: time="2024-12-13T01:28:58.037651082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:58.038461 containerd[1717]: time="2024-12-13T01:28:58.037726561Z" level=info msg="RemovePodSandbox \"52cce7749c57162fac3328b851e799e437e9bbe7f6aedbae528eebad5c5dcb12\" returns successfully" Dec 13 01:28:58.038461 containerd[1717]: time="2024-12-13T01:28:58.038268360Z" level=info msg="StopPodSandbox for \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\"" Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.070 [WARNING][5970] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa9deb93-de89-47ca-88fa-e0139fd8400e", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543", Pod:"csi-node-driver-996pm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali270d47c135e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.070 [INFO][5970] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.070 [INFO][5970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" iface="eth0" netns="" Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.070 [INFO][5970] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.070 [INFO][5970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.089 [INFO][5976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.089 [INFO][5976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.089 [INFO][5976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.097 [WARNING][5976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.097 [INFO][5976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.098 [INFO][5976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:58.101513 containerd[1717]: 2024-12-13 01:28:58.100 [INFO][5970] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:58.102148 containerd[1717]: time="2024-12-13T01:28:58.101550078Z" level=info msg="TearDown network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\" successfully" Dec 13 01:28:58.102148 containerd[1717]: time="2024-12-13T01:28:58.101574998Z" level=info msg="StopPodSandbox for \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\" returns successfully" Dec 13 01:28:58.102148 containerd[1717]: time="2024-12-13T01:28:58.102039797Z" level=info msg="RemovePodSandbox for \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\"" Dec 13 01:28:58.102148 containerd[1717]: time="2024-12-13T01:28:58.102067357Z" level=info msg="Forcibly stopping sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\"" Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.140 [WARNING][5994] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fa9deb93-de89-47ca-88fa-e0139fd8400e", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 28, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-d903163327", ContainerID:"4970ac25cd325b3ed4e0fc399388aa69d268cf99a1e07c2486d731786a77d543", Pod:"csi-node-driver-996pm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali270d47c135e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.140 [INFO][5994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.140 [INFO][5994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" iface="eth0" netns="" Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.140 [INFO][5994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.140 [INFO][5994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.160 [INFO][6000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.160 [INFO][6000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.160 [INFO][6000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.168 [WARNING][6000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.168 [INFO][6000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" HandleID="k8s-pod-network.f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Workload="ci--4081.2.1--a--d903163327-k8s-csi--node--driver--996pm-eth0" Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.169 [INFO][6000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:58.175322 containerd[1717]: 2024-12-13 01:28:58.171 [INFO][5994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8" Dec 13 01:28:58.175322 containerd[1717]: time="2024-12-13T01:28:58.174247377Z" level=info msg="TearDown network for sandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\" successfully" Dec 13 01:28:58.182566 containerd[1717]: time="2024-12-13T01:28:58.182517160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:28:58.182802 containerd[1717]: time="2024-12-13T01:28:58.182587280Z" level=info msg="RemovePodSandbox \"f94963d98ae5c595ac445acf199c658cde82af2ed2ee98dab7905c49babfdcd8\" returns successfully" Dec 13 01:29:02.317496 update_engine[1683]: I20241213 01:29:02.317317 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:29:02.317811 update_engine[1683]: I20241213 01:29:02.317564 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:29:02.317811 update_engine[1683]: I20241213 01:29:02.317780 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:29:02.335145 update_engine[1683]: E20241213 01:29:02.335103 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:29:02.335212 update_engine[1683]: I20241213 01:29:02.335175 1683 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 01:29:06.736072 kubelet[3262]: I1213 01:29:06.736025 3262 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b77b9bd95-dxdjg" podStartSLOduration=41.63831416 podStartE2EDuration="48.735986911s" podCreationTimestamp="2024-12-13 01:28:18 +0000 UTC" firstStartedPulling="2024-12-13 01:28:44.331661917 +0000 UTC m=+47.173396922" lastFinishedPulling="2024-12-13 01:28:51.429334668 +0000 UTC m=+54.271069673" observedRunningTime="2024-12-13 01:28:51.664302997 +0000 UTC m=+54.506037962" watchObservedRunningTime="2024-12-13 01:29:06.735986911 +0000 UTC m=+69.577721876"
Dec 13 01:29:08.174212 kubelet[3262]: I1213 01:29:08.174074 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:29:12.320165 update_engine[1683]: I20241213 01:29:12.319847 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:29:12.320165 update_engine[1683]: I20241213 01:29:12.320134 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:29:12.320538 update_engine[1683]: I20241213 01:29:12.320338 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:29:12.411395 update_engine[1683]: E20241213 01:29:12.411334 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:29:12.411550 update_engine[1683]: I20241213 01:29:12.411420 1683 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:29:12.411550 update_engine[1683]: I20241213 01:29:12.411450 1683 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:29:12.411550 update_engine[1683]: E20241213 01:29:12.411532 1683 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 13 01:29:12.411614 update_engine[1683]: I20241213 01:29:12.411549 1683 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 01:29:12.411614 update_engine[1683]: I20241213 01:29:12.411556 1683 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:29:12.411614 update_engine[1683]: I20241213 01:29:12.411561 1683 update_attempter.cc:306] Processing Done.
Dec 13 01:29:12.411614 update_engine[1683]: E20241213 01:29:12.411574 1683 update_attempter.cc:619] Update failed.
Dec 13 01:29:12.411614 update_engine[1683]: I20241213 01:29:12.411579 1683 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 01:29:12.411614 update_engine[1683]: I20241213 01:29:12.411584 1683 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 01:29:12.411614 update_engine[1683]: I20241213 01:29:12.411589 1683 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 01:29:12.411768 update_engine[1683]: I20241213 01:29:12.411661 1683 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:29:12.411768 update_engine[1683]: I20241213 01:29:12.411682 1683 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:29:12.411768 update_engine[1683]: I20241213 01:29:12.411688 1683 omaha_request_action.cc:272] Request:
Dec 13 01:29:12.411768 update_engine[1683]:
Dec 13 01:29:12.411768 update_engine[1683]:
Dec 13 01:29:12.411768 update_engine[1683]:
Dec 13 01:29:12.411768 update_engine[1683]:
Dec 13 01:29:12.411768 update_engine[1683]:
Dec 13 01:29:12.411768 update_engine[1683]:
Dec 13 01:29:12.411768 update_engine[1683]: I20241213 01:29:12.411705 1683 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:29:12.411940 update_engine[1683]: I20241213 01:29:12.411860 1683 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:29:12.412085 update_engine[1683]: I20241213 01:29:12.412051 1683 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:29:12.412273 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 01:29:12.429200 update_engine[1683]: E20241213 01:29:12.429143 1683 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:29:12.429309 update_engine[1683]: I20241213 01:29:12.429222 1683 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:29:12.429309 update_engine[1683]: I20241213 01:29:12.429232 1683 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:29:12.429309 update_engine[1683]: I20241213 01:29:12.429239 1683 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:29:12.429309 update_engine[1683]: I20241213 01:29:12.429244 1683 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:29:12.429309 update_engine[1683]: I20241213 01:29:12.429251 1683 update_attempter.cc:306] Processing Done.
Dec 13 01:29:12.429309 update_engine[1683]: I20241213 01:29:12.429260 1683 update_attempter.cc:310] Error event sent.
Dec 13 01:29:12.429309 update_engine[1683]: I20241213 01:29:12.429270 1683 update_check_scheduler.cc:74] Next update check in 49m5s
Dec 13 01:29:12.429738 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 13 01:29:18.167572 kubelet[3262]: I1213 01:29:18.167160 3262 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:29:36.679245 systemd[1]: run-containerd-runc-k8s.io-c6843910f3379f4373c0df03ffda4d86997ded7082bc65450b6c16c80aa5f4b5-runc.SPE4Zv.mount: Deactivated successfully.
Dec 13 01:30:29.378691 systemd[1]: Started sshd@7-10.200.20.4:22-10.200.16.10:58908.service - OpenSSH per-connection server daemon (10.200.16.10:58908).
Dec 13 01:30:29.806529 sshd[6252]: Accepted publickey for core from 10.200.16.10 port 58908 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:29.808844 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:29.813305 systemd-logind[1679]: New session 10 of user core.
Dec 13 01:30:29.816585 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:30:30.240049 sshd[6252]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:30.242649 systemd[1]: sshd@7-10.200.20.4:22-10.200.16.10:58908.service: Deactivated successfully.
Dec 13 01:30:30.245066 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:30:30.247121 systemd-logind[1679]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:30:30.248940 systemd-logind[1679]: Removed session 10.
Dec 13 01:30:35.325376 systemd[1]: Started sshd@8-10.200.20.4:22-10.200.16.10:58912.service - OpenSSH per-connection server daemon (10.200.16.10:58912).
Dec 13 01:30:35.775062 sshd[6266]: Accepted publickey for core from 10.200.16.10 port 58912 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:35.776469 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:35.780456 systemd-logind[1679]: New session 11 of user core.
Dec 13 01:30:35.788582 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:30:36.162183 sshd[6266]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:36.165861 systemd[1]: sshd@8-10.200.20.4:22-10.200.16.10:58912.service: Deactivated successfully.
Dec 13 01:30:36.167689 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:30:36.168868 systemd-logind[1679]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:30:36.171341 systemd-logind[1679]: Removed session 11.
Dec 13 01:30:41.254723 systemd[1]: Started sshd@9-10.200.20.4:22-10.200.16.10:46568.service - OpenSSH per-connection server daemon (10.200.16.10:46568).
Dec 13 01:30:41.691822 sshd[6302]: Accepted publickey for core from 10.200.16.10 port 46568 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:41.693197 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:41.697675 systemd-logind[1679]: New session 12 of user core.
Dec 13 01:30:41.704618 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:30:42.069160 sshd[6302]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:42.072519 systemd-logind[1679]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:30:42.072770 systemd[1]: sshd@9-10.200.20.4:22-10.200.16.10:46568.service: Deactivated successfully.
Dec 13 01:30:42.075462 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:30:42.078527 systemd-logind[1679]: Removed session 12.
Dec 13 01:30:42.154725 systemd[1]: Started sshd@10-10.200.20.4:22-10.200.16.10:46572.service - OpenSSH per-connection server daemon (10.200.16.10:46572).
Dec 13 01:30:42.586393 sshd[6318]: Accepted publickey for core from 10.200.16.10 port 46572 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:42.587858 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:42.592039 systemd-logind[1679]: New session 13 of user core.
Dec 13 01:30:42.601664 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:30:43.007236 sshd[6318]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:43.010196 systemd[1]: sshd@10-10.200.20.4:22-10.200.16.10:46572.service: Deactivated successfully.
Dec 13 01:30:43.012246 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:30:43.013719 systemd-logind[1679]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:30:43.015293 systemd-logind[1679]: Removed session 13.
Dec 13 01:30:43.091902 systemd[1]: Started sshd@11-10.200.20.4:22-10.200.16.10:46580.service - OpenSSH per-connection server daemon (10.200.16.10:46580).
Dec 13 01:30:43.521322 sshd[6329]: Accepted publickey for core from 10.200.16.10 port 46580 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:43.522848 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:43.527365 systemd-logind[1679]: New session 14 of user core.
Dec 13 01:30:43.535683 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:30:43.913743 sshd[6329]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:43.918332 systemd-logind[1679]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:30:43.918954 systemd[1]: sshd@11-10.200.20.4:22-10.200.16.10:46580.service: Deactivated successfully.
Dec 13 01:30:43.921248 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:30:43.922442 systemd-logind[1679]: Removed session 14.
Dec 13 01:30:48.999340 systemd[1]: Started sshd@12-10.200.20.4:22-10.200.16.10:39020.service - OpenSSH per-connection server daemon (10.200.16.10:39020).
Dec 13 01:30:49.425036 sshd[6346]: Accepted publickey for core from 10.200.16.10 port 39020 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:49.426678 sshd[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:49.431243 systemd-logind[1679]: New session 15 of user core.
Dec 13 01:30:49.436598 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:30:49.829653 sshd[6346]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:49.833346 systemd[1]: sshd@12-10.200.20.4:22-10.200.16.10:39020.service: Deactivated successfully.
Dec 13 01:30:49.836341 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:30:49.839423 systemd-logind[1679]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:30:49.840765 systemd-logind[1679]: Removed session 15.
Dec 13 01:30:54.911818 systemd[1]: Started sshd@13-10.200.20.4:22-10.200.16.10:39036.service - OpenSSH per-connection server daemon (10.200.16.10:39036).
Dec 13 01:30:55.353061 sshd[6359]: Accepted publickey for core from 10.200.16.10 port 39036 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:30:55.353959 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:30:55.357787 systemd-logind[1679]: New session 16 of user core.
Dec 13 01:30:55.363592 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:30:55.728931 sshd[6359]: pam_unix(sshd:session): session closed for user core
Dec 13 01:30:55.731892 systemd[1]: sshd@13-10.200.20.4:22-10.200.16.10:39036.service: Deactivated successfully.
Dec 13 01:30:55.735042 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:30:55.736859 systemd-logind[1679]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:30:55.738206 systemd-logind[1679]: Removed session 16.
Dec 13 01:31:00.814670 systemd[1]: Started sshd@14-10.200.20.4:22-10.200.16.10:44054.service - OpenSSH per-connection server daemon (10.200.16.10:44054).
Dec 13 01:31:01.246523 sshd[6393]: Accepted publickey for core from 10.200.16.10 port 44054 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:01.249050 sshd[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:01.254014 systemd-logind[1679]: New session 17 of user core.
Dec 13 01:31:01.256589 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:31:01.620051 sshd[6393]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:01.623276 systemd[1]: sshd@14-10.200.20.4:22-10.200.16.10:44054.service: Deactivated successfully.
Dec 13 01:31:01.625176 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:31:01.625960 systemd-logind[1679]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:31:01.627155 systemd-logind[1679]: Removed session 17.
Dec 13 01:31:01.704666 systemd[1]: Started sshd@15-10.200.20.4:22-10.200.16.10:44068.service - OpenSSH per-connection server daemon (10.200.16.10:44068).
Dec 13 01:31:02.152268 sshd[6406]: Accepted publickey for core from 10.200.16.10 port 44068 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:02.153739 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:02.157996 systemd-logind[1679]: New session 18 of user core.
Dec 13 01:31:02.166749 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:31:02.667584 sshd[6406]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:02.672674 systemd[1]: sshd@15-10.200.20.4:22-10.200.16.10:44068.service: Deactivated successfully.
Dec 13 01:31:02.674388 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:31:02.681024 systemd-logind[1679]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:31:02.684650 systemd-logind[1679]: Removed session 18.
Dec 13 01:31:02.750172 systemd[1]: Started sshd@16-10.200.20.4:22-10.200.16.10:44082.service - OpenSSH per-connection server daemon (10.200.16.10:44082).
Dec 13 01:31:03.201002 sshd[6417]: Accepted publickey for core from 10.200.16.10 port 44082 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:03.202461 sshd[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:03.206509 systemd-logind[1679]: New session 19 of user core.
Dec 13 01:31:03.211623 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:31:05.100124 sshd[6417]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:05.103651 systemd[1]: sshd@16-10.200.20.4:22-10.200.16.10:44082.service: Deactivated successfully.
Dec 13 01:31:05.106494 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:31:05.108195 systemd-logind[1679]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:31:05.109198 systemd-logind[1679]: Removed session 19.
Dec 13 01:31:05.182214 systemd[1]: Started sshd@17-10.200.20.4:22-10.200.16.10:44092.service - OpenSSH per-connection server daemon (10.200.16.10:44092).
Dec 13 01:31:05.629537 sshd[6438]: Accepted publickey for core from 10.200.16.10 port 44092 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:05.630466 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:05.634340 systemd-logind[1679]: New session 20 of user core.
Dec 13 01:31:05.641601 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:31:06.126171 sshd[6438]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:06.131874 systemd[1]: sshd@17-10.200.20.4:22-10.200.16.10:44092.service: Deactivated successfully.
Dec 13 01:31:06.132016 systemd-logind[1679]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:31:06.134996 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:31:06.136253 systemd-logind[1679]: Removed session 20.
Dec 13 01:31:06.218747 systemd[1]: Started sshd@18-10.200.20.4:22-10.200.16.10:44098.service - OpenSSH per-connection server daemon (10.200.16.10:44098).
Dec 13 01:31:06.663259 sshd[6448]: Accepted publickey for core from 10.200.16.10 port 44098 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:06.665145 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:06.670646 systemd-logind[1679]: New session 21 of user core.
Dec 13 01:31:06.676639 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:31:06.687556 systemd[1]: run-containerd-runc-k8s.io-c6843910f3379f4373c0df03ffda4d86997ded7082bc65450b6c16c80aa5f4b5-runc.nYfFQq.mount: Deactivated successfully.
Dec 13 01:31:07.046159 sshd[6448]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:07.049189 systemd-logind[1679]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:31:07.049335 systemd[1]: sshd@18-10.200.20.4:22-10.200.16.10:44098.service: Deactivated successfully.
Dec 13 01:31:07.051731 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:31:07.053945 systemd-logind[1679]: Removed session 21.
Dec 13 01:31:12.125796 systemd[1]: Started sshd@19-10.200.20.4:22-10.200.16.10:45332.service - OpenSSH per-connection server daemon (10.200.16.10:45332).
Dec 13 01:31:12.563194 sshd[6489]: Accepted publickey for core from 10.200.16.10 port 45332 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:12.564817 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:12.572764 systemd-logind[1679]: New session 22 of user core.
Dec 13 01:31:12.580631 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:31:12.937649 sshd[6489]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:12.941273 systemd[1]: sshd@19-10.200.20.4:22-10.200.16.10:45332.service: Deactivated successfully.
Dec 13 01:31:12.943080 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:31:12.943846 systemd-logind[1679]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:31:12.945185 systemd-logind[1679]: Removed session 22.
Dec 13 01:31:18.017160 systemd[1]: Started sshd@20-10.200.20.4:22-10.200.16.10:45346.service - OpenSSH per-connection server daemon (10.200.16.10:45346).
Dec 13 01:31:18.465881 sshd[6505]: Accepted publickey for core from 10.200.16.10 port 45346 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:18.467310 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:18.472137 systemd-logind[1679]: New session 23 of user core.
Dec 13 01:31:18.476579 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:31:18.850889 sshd[6505]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:18.853682 systemd[1]: sshd@20-10.200.20.4:22-10.200.16.10:45346.service: Deactivated successfully.
Dec 13 01:31:18.856707 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:31:18.859516 systemd-logind[1679]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:31:18.861156 systemd-logind[1679]: Removed session 23.
Dec 13 01:31:23.933391 systemd[1]: Started sshd@21-10.200.20.4:22-10.200.16.10:50844.service - OpenSSH per-connection server daemon (10.200.16.10:50844).
Dec 13 01:31:24.362109 sshd[6523]: Accepted publickey for core from 10.200.16.10 port 50844 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:24.363519 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:24.367422 systemd-logind[1679]: New session 24 of user core.
Dec 13 01:31:24.375590 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:31:24.751225 sshd[6523]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:24.754133 systemd[1]: sshd@21-10.200.20.4:22-10.200.16.10:50844.service: Deactivated successfully.
Dec 13 01:31:24.756232 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:31:24.757600 systemd-logind[1679]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:31:24.759018 systemd-logind[1679]: Removed session 24.
Dec 13 01:31:29.842702 systemd[1]: Started sshd@22-10.200.20.4:22-10.200.16.10:59758.service - OpenSSH per-connection server daemon (10.200.16.10:59758).
Dec 13 01:31:30.286903 sshd[6580]: Accepted publickey for core from 10.200.16.10 port 59758 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:30.288591 sshd[6580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:30.292998 systemd-logind[1679]: New session 25 of user core.
Dec 13 01:31:30.298648 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:31:30.671631 sshd[6580]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:30.675474 systemd[1]: sshd@22-10.200.20.4:22-10.200.16.10:59758.service: Deactivated successfully.
Dec 13 01:31:30.676194 systemd-logind[1679]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:31:30.680100 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:31:30.682605 systemd-logind[1679]: Removed session 25.
Dec 13 01:31:35.762747 systemd[1]: Started sshd@23-10.200.20.4:22-10.200.16.10:59772.service - OpenSSH per-connection server daemon (10.200.16.10:59772).
Dec 13 01:31:36.195112 sshd[6593]: Accepted publickey for core from 10.200.16.10 port 59772 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:31:36.196772 sshd[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:36.201092 systemd-logind[1679]: New session 26 of user core.
Dec 13 01:31:36.204622 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:31:36.575781 sshd[6593]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:36.578301 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:31:36.580061 systemd[1]: sshd@23-10.200.20.4:22-10.200.16.10:59772.service: Deactivated successfully.
Dec 13 01:31:36.583390 systemd-logind[1679]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:31:36.584508 systemd-logind[1679]: Removed session 26.